Test Report: KVM_Linux_crio 17779

                    
a62a866ace620084e399351c2e47ff577ac5512f:2024-06-12:34866

Failed tests (31/312)

Order  Failed test  Duration (s)
30 TestAddons/parallel/Ingress 153.15
32 TestAddons/parallel/MetricsServer 318.91
45 TestAddons/StoppedEnableDisable 154.35
148 TestFunctional/parallel/ImageCommands/ImageRemove 2.73
164 TestMultiControlPlane/serial/StopSecondaryNode 141.94
166 TestMultiControlPlane/serial/RestartSecondaryNode 60.98
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 400.8
171 TestMultiControlPlane/serial/StopCluster 141.79
231 TestMultiNode/serial/RestartKeepsNodes 308.54
233 TestMultiNode/serial/StopMultiNode 141.3
240 TestPreload 213.85
248 TestKubernetesUpgrade 400.1
282 TestPause/serial/SecondStartNoReconfiguration 50.97
315 TestStartStop/group/old-k8s-version/serial/FirstStart 279.07
340 TestStartStop/group/no-preload/serial/Stop 139.04
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.09
345 TestStartStop/group/embed-certs/serial/Stop 139.02
346 TestStartStop/group/old-k8s-version/serial/DeployApp 0.49
347 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 96.46
348 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
352 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
356 TestStartStop/group/old-k8s-version/serial/SecondStart 765.97
357 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.23
358 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.25
359 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.28
360 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.43
361 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 435.98
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 369.75
363 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 280.19
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 116.37
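Only the first failure, TestAddons/parallel/Ingress, is expanded with full logs below. To chase down any other entry in this table locally, one option is to re-run just that test by name. This is a sketch assuming minikube's usual go-test integration layout (test/integration, where the addons_test.go and helpers_test.go files cited in the logs live); the timeout and any extra driver or start-args flags your environment needs may differ:

	# Hypothetical local re-run of a single failed integration test.
	# Adjust the timeout and add driver-specific flags as required.
	go test ./test/integration -run "TestAddons/parallel/Ingress" -timeout 60m -v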
TestAddons/parallel/Ingress (153.15s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-899843 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-899843 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-899843 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [63c525be-66b7-432d-b1ae-2f835c9880fb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [63c525be-66b7-432d-b1ae-2f835c9880fb] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.00473507s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-899843 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-899843 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.444826666s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-899843 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-899843 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.248
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-899843 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-899843 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-899843 addons disable ingress --alsologtostderr -v=1: (7.671125302s)
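The actual failure above is the in-VM curl: the remote command's exit status 28 maps to curl's "operation timed out" error, meaning the ingress controller never answered on 127.0.0.1 within the test's 2m10s window; the steps after it are cleanup. A manual spot-check of the same path, sketched here with the profile and context names from this run and assuming the cluster is still up, would be:

	# Repeat the test's request by hand, printing only the HTTP status code.
	out/minikube-linux-amd64 -p addons-899843 ssh "curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: nginx.example.com' http://127.0.0.1/"
	# Verify the ingress controller and the backing nginx pod/service are healthy.
	kubectl --context addons-899843 -n ingress-nginx get pods -o wide
	kubectl --context addons-899843 get ingress,svc,pods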
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-899843 -n addons-899843
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-899843 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-899843 logs -n 25: (1.274515574s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-740695 | jenkins | v1.33.1 | 12 Jun 24 20:11 UTC |                     |
	|         | -p download-only-740695                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC | 12 Jun 24 20:12 UTC |
	| delete  | -p download-only-740695                                                                     | download-only-740695 | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC | 12 Jun 24 20:12 UTC |
	| delete  | -p download-only-691398                                                                     | download-only-691398 | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC | 12 Jun 24 20:12 UTC |
	| delete  | -p download-only-740695                                                                     | download-only-740695 | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC | 12 Jun 24 20:12 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-323011 | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC |                     |
	|         | binary-mirror-323011                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40201                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-323011                                                                     | binary-mirror-323011 | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC | 12 Jun 24 20:12 UTC |
	| addons  | disable dashboard -p                                                                        | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC |                     |
	|         | addons-899843                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC |                     |
	|         | addons-899843                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-899843 --wait=true                                                                | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC | 12 Jun 24 20:14 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:14 UTC | 12 Jun 24 20:14 UTC |
	|         | -p addons-899843                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-899843 addons disable                                                                | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:15 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-899843 ip                                                                            | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:15 UTC |
	| addons  | addons-899843 addons disable                                                                | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:15 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:15 UTC |
	|         | -p addons-899843                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:15 UTC |
	|         | addons-899843                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:15 UTC |
	|         | addons-899843                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-899843 ssh cat                                                                       | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:15 UTC |
	|         | /opt/local-path-provisioner/pvc-0b5a2113-5bb0-41c3-b569-15c053bb7f98_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-899843 addons disable                                                                | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:16 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-899843 ssh curl -s                                                                   | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-899843 addons                                                                        | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:16 UTC | 12 Jun 24 20:16 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-899843 addons                                                                        | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:16 UTC | 12 Jun 24 20:16 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-899843 ip                                                                            | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:17 UTC | 12 Jun 24 20:17 UTC |
	| addons  | addons-899843 addons disable                                                                | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:17 UTC | 12 Jun 24 20:17 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-899843 addons disable                                                                | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:17 UTC | 12 Jun 24 20:17 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 20:12:27
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 20:12:27.664136   22294 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:12:27.664256   22294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:12:27.664265   22294 out.go:304] Setting ErrFile to fd 2...
	I0612 20:12:27.664270   22294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:12:27.664477   22294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:12:27.665096   22294 out.go:298] Setting JSON to false
	I0612 20:12:27.665957   22294 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3293,"bootTime":1718219855,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 20:12:27.666014   22294 start.go:139] virtualization: kvm guest
	I0612 20:12:27.668220   22294 out.go:177] * [addons-899843] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 20:12:27.669642   22294 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 20:12:27.669592   22294 notify.go:220] Checking for updates...
	I0612 20:12:27.671320   22294 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 20:12:27.672719   22294 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:12:27.674240   22294 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:12:27.675696   22294 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 20:12:27.677151   22294 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 20:12:27.678737   22294 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 20:12:27.710480   22294 out.go:177] * Using the kvm2 driver based on user configuration
	I0612 20:12:27.711863   22294 start.go:297] selected driver: kvm2
	I0612 20:12:27.711878   22294 start.go:901] validating driver "kvm2" against <nil>
	I0612 20:12:27.711888   22294 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 20:12:27.712578   22294 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:12:27.712637   22294 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 20:12:27.728064   22294 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0612 20:12:27.728115   22294 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0612 20:12:27.728322   22294 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 20:12:27.728381   22294 cni.go:84] Creating CNI manager for ""
	I0612 20:12:27.728393   22294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 20:12:27.728401   22294 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0612 20:12:27.728444   22294 start.go:340] cluster config:
	{Name:addons-899843 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-899843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:12:27.728560   22294 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:12:27.731168   22294 out.go:177] * Starting "addons-899843" primary control-plane node in "addons-899843" cluster
	I0612 20:12:27.732494   22294 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:12:27.732537   22294 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0612 20:12:27.732552   22294 cache.go:56] Caching tarball of preloaded images
	I0612 20:12:27.732627   22294 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 20:12:27.732640   22294 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 20:12:27.732929   22294 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/config.json ...
	I0612 20:12:27.732956   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/config.json: {Name:mk0814d0dfa3d865c35e7e0ab42305e7a784a00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:27.733109   22294 start.go:360] acquireMachinesLock for addons-899843: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 20:12:27.733172   22294 start.go:364] duration metric: took 46.572µs to acquireMachinesLock for "addons-899843"
	I0612 20:12:27.733194   22294 start.go:93] Provisioning new machine with config: &{Name:addons-899843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:addons-899843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:12:27.733282   22294 start.go:125] createHost starting for "" (driver="kvm2")
	I0612 20:12:27.735038   22294 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0612 20:12:27.735222   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:12:27.735273   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:12:27.749372   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41263
	I0612 20:12:27.749812   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:12:27.750520   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:12:27.750541   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:12:27.750879   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:12:27.751086   22294 main.go:141] libmachine: (addons-899843) Calling .GetMachineName
	I0612 20:12:27.751263   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:27.751426   22294 start.go:159] libmachine.API.Create for "addons-899843" (driver="kvm2")
	I0612 20:12:27.751454   22294 client.go:168] LocalClient.Create starting
	I0612 20:12:27.751497   22294 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem
	I0612 20:12:27.888279   22294 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem
	I0612 20:12:28.307942   22294 main.go:141] libmachine: Running pre-create checks...
	I0612 20:12:28.307965   22294 main.go:141] libmachine: (addons-899843) Calling .PreCreateCheck
	I0612 20:12:28.308473   22294 main.go:141] libmachine: (addons-899843) Calling .GetConfigRaw
	I0612 20:12:28.308921   22294 main.go:141] libmachine: Creating machine...
	I0612 20:12:28.308936   22294 main.go:141] libmachine: (addons-899843) Calling .Create
	I0612 20:12:28.309103   22294 main.go:141] libmachine: (addons-899843) Creating KVM machine...
	I0612 20:12:28.310432   22294 main.go:141] libmachine: (addons-899843) DBG | found existing default KVM network
	I0612 20:12:28.311157   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:28.310999   22316 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0612 20:12:28.311196   22294 main.go:141] libmachine: (addons-899843) DBG | created network xml: 
	I0612 20:12:28.311209   22294 main.go:141] libmachine: (addons-899843) DBG | <network>
	I0612 20:12:28.311223   22294 main.go:141] libmachine: (addons-899843) DBG |   <name>mk-addons-899843</name>
	I0612 20:12:28.311233   22294 main.go:141] libmachine: (addons-899843) DBG |   <dns enable='no'/>
	I0612 20:12:28.311237   22294 main.go:141] libmachine: (addons-899843) DBG |   
	I0612 20:12:28.311244   22294 main.go:141] libmachine: (addons-899843) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0612 20:12:28.311253   22294 main.go:141] libmachine: (addons-899843) DBG |     <dhcp>
	I0612 20:12:28.311259   22294 main.go:141] libmachine: (addons-899843) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0612 20:12:28.311264   22294 main.go:141] libmachine: (addons-899843) DBG |     </dhcp>
	I0612 20:12:28.311271   22294 main.go:141] libmachine: (addons-899843) DBG |   </ip>
	I0612 20:12:28.311278   22294 main.go:141] libmachine: (addons-899843) DBG |   
	I0612 20:12:28.311290   22294 main.go:141] libmachine: (addons-899843) DBG | </network>
	I0612 20:12:28.311299   22294 main.go:141] libmachine: (addons-899843) DBG | 
	I0612 20:12:28.316800   22294 main.go:141] libmachine: (addons-899843) DBG | trying to create private KVM network mk-addons-899843 192.168.39.0/24...
	I0612 20:12:28.377005   22294 main.go:141] libmachine: (addons-899843) DBG | private KVM network mk-addons-899843 192.168.39.0/24 created
	I0612 20:12:28.377034   22294 main.go:141] libmachine: (addons-899843) Setting up store path in /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843 ...
	I0612 20:12:28.377063   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:28.376982   22316 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:12:28.377086   22294 main.go:141] libmachine: (addons-899843) Building disk image from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0612 20:12:28.377137   22294 main.go:141] libmachine: (addons-899843) Downloading /home/jenkins/minikube-integration/17779-14199/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0612 20:12:28.636099   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:28.635977   22316 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa...
	I0612 20:12:28.782667   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:28.782536   22316 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/addons-899843.rawdisk...
	I0612 20:12:28.782695   22294 main.go:141] libmachine: (addons-899843) DBG | Writing magic tar header
	I0612 20:12:28.782707   22294 main.go:141] libmachine: (addons-899843) DBG | Writing SSH key tar header
	I0612 20:12:28.782715   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:28.782645   22316 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843 ...
	I0612 20:12:28.782726   22294 main.go:141] libmachine: (addons-899843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843
	I0612 20:12:28.782797   22294 main.go:141] libmachine: (addons-899843) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843 (perms=drwx------)
	I0612 20:12:28.782823   22294 main.go:141] libmachine: (addons-899843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines
	I0612 20:12:28.782831   22294 main.go:141] libmachine: (addons-899843) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines (perms=drwxr-xr-x)
	I0612 20:12:28.782846   22294 main.go:141] libmachine: (addons-899843) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube (perms=drwxr-xr-x)
	I0612 20:12:28.782857   22294 main.go:141] libmachine: (addons-899843) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199 (perms=drwxrwxr-x)
	I0612 20:12:28.782872   22294 main.go:141] libmachine: (addons-899843) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0612 20:12:28.782881   22294 main.go:141] libmachine: (addons-899843) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0612 20:12:28.782894   22294 main.go:141] libmachine: (addons-899843) Creating domain...
	I0612 20:12:28.782902   22294 main.go:141] libmachine: (addons-899843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:12:28.782914   22294 main.go:141] libmachine: (addons-899843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199
	I0612 20:12:28.782936   22294 main.go:141] libmachine: (addons-899843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0612 20:12:28.782959   22294 main.go:141] libmachine: (addons-899843) DBG | Checking permissions on dir: /home/jenkins
	I0612 20:12:28.782974   22294 main.go:141] libmachine: (addons-899843) DBG | Checking permissions on dir: /home
	I0612 20:12:28.782989   22294 main.go:141] libmachine: (addons-899843) DBG | Skipping /home - not owner
	I0612 20:12:28.783882   22294 main.go:141] libmachine: (addons-899843) define libvirt domain using xml: 
	I0612 20:12:28.783909   22294 main.go:141] libmachine: (addons-899843) <domain type='kvm'>
	I0612 20:12:28.783916   22294 main.go:141] libmachine: (addons-899843)   <name>addons-899843</name>
	I0612 20:12:28.783925   22294 main.go:141] libmachine: (addons-899843)   <memory unit='MiB'>4000</memory>
	I0612 20:12:28.783948   22294 main.go:141] libmachine: (addons-899843)   <vcpu>2</vcpu>
	I0612 20:12:28.783966   22294 main.go:141] libmachine: (addons-899843)   <features>
	I0612 20:12:28.783993   22294 main.go:141] libmachine: (addons-899843)     <acpi/>
	I0612 20:12:28.784011   22294 main.go:141] libmachine: (addons-899843)     <apic/>
	I0612 20:12:28.784018   22294 main.go:141] libmachine: (addons-899843)     <pae/>
	I0612 20:12:28.784026   22294 main.go:141] libmachine: (addons-899843)     
	I0612 20:12:28.784031   22294 main.go:141] libmachine: (addons-899843)   </features>
	I0612 20:12:28.784036   22294 main.go:141] libmachine: (addons-899843)   <cpu mode='host-passthrough'>
	I0612 20:12:28.784041   22294 main.go:141] libmachine: (addons-899843)   
	I0612 20:12:28.784050   22294 main.go:141] libmachine: (addons-899843)   </cpu>
	I0612 20:12:28.784058   22294 main.go:141] libmachine: (addons-899843)   <os>
	I0612 20:12:28.784064   22294 main.go:141] libmachine: (addons-899843)     <type>hvm</type>
	I0612 20:12:28.784070   22294 main.go:141] libmachine: (addons-899843)     <boot dev='cdrom'/>
	I0612 20:12:28.784074   22294 main.go:141] libmachine: (addons-899843)     <boot dev='hd'/>
	I0612 20:12:28.784080   22294 main.go:141] libmachine: (addons-899843)     <bootmenu enable='no'/>
	I0612 20:12:28.784087   22294 main.go:141] libmachine: (addons-899843)   </os>
	I0612 20:12:28.784092   22294 main.go:141] libmachine: (addons-899843)   <devices>
	I0612 20:12:28.784099   22294 main.go:141] libmachine: (addons-899843)     <disk type='file' device='cdrom'>
	I0612 20:12:28.784113   22294 main.go:141] libmachine: (addons-899843)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/boot2docker.iso'/>
	I0612 20:12:28.784127   22294 main.go:141] libmachine: (addons-899843)       <target dev='hdc' bus='scsi'/>
	I0612 20:12:28.784143   22294 main.go:141] libmachine: (addons-899843)       <readonly/>
	I0612 20:12:28.784159   22294 main.go:141] libmachine: (addons-899843)     </disk>
	I0612 20:12:28.784172   22294 main.go:141] libmachine: (addons-899843)     <disk type='file' device='disk'>
	I0612 20:12:28.784181   22294 main.go:141] libmachine: (addons-899843)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0612 20:12:28.784190   22294 main.go:141] libmachine: (addons-899843)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/addons-899843.rawdisk'/>
	I0612 20:12:28.784198   22294 main.go:141] libmachine: (addons-899843)       <target dev='hda' bus='virtio'/>
	I0612 20:12:28.784203   22294 main.go:141] libmachine: (addons-899843)     </disk>
	I0612 20:12:28.784210   22294 main.go:141] libmachine: (addons-899843)     <interface type='network'>
	I0612 20:12:28.784219   22294 main.go:141] libmachine: (addons-899843)       <source network='mk-addons-899843'/>
	I0612 20:12:28.784234   22294 main.go:141] libmachine: (addons-899843)       <model type='virtio'/>
	I0612 20:12:28.784246   22294 main.go:141] libmachine: (addons-899843)     </interface>
	I0612 20:12:28.784254   22294 main.go:141] libmachine: (addons-899843)     <interface type='network'>
	I0612 20:12:28.784266   22294 main.go:141] libmachine: (addons-899843)       <source network='default'/>
	I0612 20:12:28.784275   22294 main.go:141] libmachine: (addons-899843)       <model type='virtio'/>
	I0612 20:12:28.784294   22294 main.go:141] libmachine: (addons-899843)     </interface>
	I0612 20:12:28.784307   22294 main.go:141] libmachine: (addons-899843)     <serial type='pty'>
	I0612 20:12:28.784326   22294 main.go:141] libmachine: (addons-899843)       <target port='0'/>
	I0612 20:12:28.784335   22294 main.go:141] libmachine: (addons-899843)     </serial>
	I0612 20:12:28.784346   22294 main.go:141] libmachine: (addons-899843)     <console type='pty'>
	I0612 20:12:28.784360   22294 main.go:141] libmachine: (addons-899843)       <target type='serial' port='0'/>
	I0612 20:12:28.784374   22294 main.go:141] libmachine: (addons-899843)     </console>
	I0612 20:12:28.784382   22294 main.go:141] libmachine: (addons-899843)     <rng model='virtio'>
	I0612 20:12:28.784389   22294 main.go:141] libmachine: (addons-899843)       <backend model='random'>/dev/random</backend>
	I0612 20:12:28.784398   22294 main.go:141] libmachine: (addons-899843)     </rng>
	I0612 20:12:28.784403   22294 main.go:141] libmachine: (addons-899843)     
	I0612 20:12:28.784407   22294 main.go:141] libmachine: (addons-899843)     
	I0612 20:12:28.784412   22294 main.go:141] libmachine: (addons-899843)   </devices>
	I0612 20:12:28.784418   22294 main.go:141] libmachine: (addons-899843) </domain>
	I0612 20:12:28.784425   22294 main.go:141] libmachine: (addons-899843) 
	I0612 20:12:28.790312   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:d6:2b:fa in network default
	I0612 20:12:28.790784   22294 main.go:141] libmachine: (addons-899843) Ensuring networks are active...
	I0612 20:12:28.790807   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:28.791382   22294 main.go:141] libmachine: (addons-899843) Ensuring network default is active
	I0612 20:12:28.791621   22294 main.go:141] libmachine: (addons-899843) Ensuring network mk-addons-899843 is active
	I0612 20:12:28.792048   22294 main.go:141] libmachine: (addons-899843) Getting domain xml...
	I0612 20:12:28.792635   22294 main.go:141] libmachine: (addons-899843) Creating domain...
	I0612 20:12:30.203186   22294 main.go:141] libmachine: (addons-899843) Waiting to get IP...
	I0612 20:12:30.204073   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:30.204483   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:30.204558   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:30.204488   22316 retry.go:31] will retry after 220.702949ms: waiting for machine to come up
	I0612 20:12:30.426917   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:30.427435   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:30.427461   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:30.427387   22316 retry.go:31] will retry after 336.04644ms: waiting for machine to come up
	I0612 20:12:30.765132   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:30.765585   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:30.765615   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:30.765551   22316 retry.go:31] will retry after 306.64442ms: waiting for machine to come up
	I0612 20:12:31.074156   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:31.074613   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:31.074643   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:31.074565   22316 retry.go:31] will retry after 510.553284ms: waiting for machine to come up
	I0612 20:12:31.586364   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:31.586793   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:31.586815   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:31.586749   22316 retry.go:31] will retry after 613.530836ms: waiting for machine to come up
	I0612 20:12:32.201589   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:32.202102   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:32.202126   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:32.202052   22316 retry.go:31] will retry after 574.741292ms: waiting for machine to come up
	I0612 20:12:32.778584   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:32.779073   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:32.779096   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:32.779008   22316 retry.go:31] will retry after 725.270321ms: waiting for machine to come up
	I0612 20:12:33.505767   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:33.506097   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:33.506123   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:33.506041   22316 retry.go:31] will retry after 1.392184112s: waiting for machine to come up
	I0612 20:12:34.900331   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:34.900741   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:34.900770   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:34.900721   22316 retry.go:31] will retry after 1.491312427s: waiting for machine to come up
	I0612 20:12:36.394363   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:36.394776   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:36.394803   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:36.394733   22316 retry.go:31] will retry after 2.066052302s: waiting for machine to come up
	I0612 20:12:38.462083   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:38.462507   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:38.462530   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:38.462454   22316 retry.go:31] will retry after 2.034306402s: waiting for machine to come up
	I0612 20:12:40.499615   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:40.500147   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:40.500171   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:40.500107   22316 retry.go:31] will retry after 2.283056423s: waiting for machine to come up
	I0612 20:12:42.785089   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:42.785491   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:42.785518   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:42.785434   22316 retry.go:31] will retry after 2.756143171s: waiting for machine to come up
	I0612 20:12:45.545347   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:45.545880   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:45.545903   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:45.545815   22316 retry.go:31] will retry after 4.896758392s: waiting for machine to come up
	I0612 20:12:50.445545   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.446012   22294 main.go:141] libmachine: (addons-899843) Found IP for machine: 192.168.39.248
	I0612 20:12:50.446029   22294 main.go:141] libmachine: (addons-899843) Reserving static IP address...
	I0612 20:12:50.446037   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has current primary IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.446468   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find host DHCP lease matching {name: "addons-899843", mac: "52:54:00:58:9b:d7", ip: "192.168.39.248"} in network mk-addons-899843
	I0612 20:12:50.516055   22294 main.go:141] libmachine: (addons-899843) DBG | Getting to WaitForSSH function...
	I0612 20:12:50.516085   22294 main.go:141] libmachine: (addons-899843) Reserved static IP address: 192.168.39.248
	I0612 20:12:50.516098   22294 main.go:141] libmachine: (addons-899843) Waiting for SSH to be available...
	I0612 20:12:50.518668   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.519236   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:minikube Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:50.519260   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.519374   22294 main.go:141] libmachine: (addons-899843) DBG | Using SSH client type: external
	I0612 20:12:50.519398   22294 main.go:141] libmachine: (addons-899843) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa (-rw-------)
	I0612 20:12:50.519429   22294 main.go:141] libmachine: (addons-899843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 20:12:50.519448   22294 main.go:141] libmachine: (addons-899843) DBG | About to run SSH command:
	I0612 20:12:50.519476   22294 main.go:141] libmachine: (addons-899843) DBG | exit 0
	I0612 20:12:50.651104   22294 main.go:141] libmachine: (addons-899843) DBG | SSH cmd err, output: <nil>: 
	I0612 20:12:50.651316   22294 main.go:141] libmachine: (addons-899843) KVM machine creation complete!
	I0612 20:12:50.651664   22294 main.go:141] libmachine: (addons-899843) Calling .GetConfigRaw
	I0612 20:12:50.652233   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:50.652433   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:50.652613   22294 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0612 20:12:50.652632   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:12:50.654163   22294 main.go:141] libmachine: Detecting operating system of created instance...
	I0612 20:12:50.654177   22294 main.go:141] libmachine: Waiting for SSH to be available...
	I0612 20:12:50.654193   22294 main.go:141] libmachine: Getting to WaitForSSH function...
	I0612 20:12:50.654199   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:50.656495   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.656845   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:50.656872   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.656965   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:50.657193   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.657348   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.657457   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:50.657598   22294 main.go:141] libmachine: Using SSH client type: native
	I0612 20:12:50.657812   22294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0612 20:12:50.657825   22294 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0612 20:12:50.758727   22294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 20:12:50.758762   22294 main.go:141] libmachine: Detecting the provisioner...
	I0612 20:12:50.758771   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:50.761351   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.761732   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:50.761757   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.761950   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:50.762152   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.762272   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.762381   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:50.762643   22294 main.go:141] libmachine: Using SSH client type: native
	I0612 20:12:50.762833   22294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0612 20:12:50.762846   22294 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0612 20:12:50.864040   22294 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0612 20:12:50.864109   22294 main.go:141] libmachine: found compatible host: buildroot
	I0612 20:12:50.864117   22294 main.go:141] libmachine: Provisioning with buildroot...
	I0612 20:12:50.864131   22294 main.go:141] libmachine: (addons-899843) Calling .GetMachineName
	I0612 20:12:50.864401   22294 buildroot.go:166] provisioning hostname "addons-899843"
	I0612 20:12:50.864426   22294 main.go:141] libmachine: (addons-899843) Calling .GetMachineName
	I0612 20:12:50.864646   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:50.867751   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.868206   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:50.868230   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.868395   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:50.868589   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.868761   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.868899   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:50.869054   22294 main.go:141] libmachine: Using SSH client type: native
	I0612 20:12:50.869215   22294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0612 20:12:50.869227   22294 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-899843 && echo "addons-899843" | sudo tee /etc/hostname
	I0612 20:12:50.987938   22294 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-899843
	
	I0612 20:12:50.987968   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:50.990308   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.990587   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:50.990607   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.990758   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:50.991055   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.991227   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.991386   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:50.991566   22294 main.go:141] libmachine: Using SSH client type: native
	I0612 20:12:50.991762   22294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0612 20:12:50.991787   22294 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-899843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-899843/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-899843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 20:12:51.103699   22294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 20:12:51.103725   22294 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 20:12:51.103759   22294 buildroot.go:174] setting up certificates
	I0612 20:12:51.103770   22294 provision.go:84] configureAuth start
	I0612 20:12:51.103778   22294 main.go:141] libmachine: (addons-899843) Calling .GetMachineName
	I0612 20:12:51.104072   22294 main.go:141] libmachine: (addons-899843) Calling .GetIP
	I0612 20:12:51.106750   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.107071   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.107119   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.107253   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:51.109229   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.109584   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.109612   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.109733   22294 provision.go:143] copyHostCerts
	I0612 20:12:51.109815   22294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 20:12:51.109938   22294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 20:12:51.109999   22294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 20:12:51.110043   22294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.addons-899843 san=[127.0.0.1 192.168.39.248 addons-899843 localhost minikube]
	I0612 20:12:51.255476   22294 provision.go:177] copyRemoteCerts
	I0612 20:12:51.255529   22294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 20:12:51.255550   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:51.257967   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.258321   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.258346   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.258512   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:51.258693   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.258881   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:51.259034   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:12:51.343093   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 20:12:51.366909   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0612 20:12:51.390260   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 20:12:51.414942   22294 provision.go:87] duration metric: took 311.160243ms to configureAuth
	I0612 20:12:51.414968   22294 buildroot.go:189] setting minikube options for container-runtime
	I0612 20:12:51.415194   22294 config.go:182] Loaded profile config "addons-899843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:12:51.415279   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:51.417934   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.418359   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.418389   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.418608   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:51.418805   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.418962   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.419085   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:51.419248   22294 main.go:141] libmachine: Using SSH client type: native
	I0612 20:12:51.419412   22294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0612 20:12:51.419426   22294 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 20:12:51.692326   22294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 20:12:51.692357   22294 main.go:141] libmachine: Checking connection to Docker...
	I0612 20:12:51.692370   22294 main.go:141] libmachine: (addons-899843) Calling .GetURL
	I0612 20:12:51.693519   22294 main.go:141] libmachine: (addons-899843) DBG | Using libvirt version 6000000
	I0612 20:12:51.695764   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.696100   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.696126   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.696270   22294 main.go:141] libmachine: Docker is up and running!
	I0612 20:12:51.696289   22294 main.go:141] libmachine: Reticulating splines...
	I0612 20:12:51.696297   22294 client.go:171] duration metric: took 23.944833507s to LocalClient.Create
	I0612 20:12:51.696319   22294 start.go:167] duration metric: took 23.9448957s to libmachine.API.Create "addons-899843"
	I0612 20:12:51.696337   22294 start.go:293] postStartSetup for "addons-899843" (driver="kvm2")
	I0612 20:12:51.696348   22294 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 20:12:51.696363   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:51.696580   22294 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 20:12:51.696603   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:51.698554   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.698898   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.698922   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.699050   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:51.699243   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.699407   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:51.699537   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:12:51.782216   22294 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 20:12:51.786528   22294 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 20:12:51.786550   22294 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 20:12:51.786640   22294 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 20:12:51.786678   22294 start.go:296] duration metric: took 90.333793ms for postStartSetup
	I0612 20:12:51.786716   22294 main.go:141] libmachine: (addons-899843) Calling .GetConfigRaw
	I0612 20:12:51.787304   22294 main.go:141] libmachine: (addons-899843) Calling .GetIP
	I0612 20:12:51.789850   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.790157   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.790187   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.790380   22294 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/config.json ...
	I0612 20:12:51.790542   22294 start.go:128] duration metric: took 24.057249977s to createHost
	I0612 20:12:51.790563   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:51.792358   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.792654   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.792678   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.792826   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:51.793036   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.793186   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.793358   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:51.793548   22294 main.go:141] libmachine: Using SSH client type: native
	I0612 20:12:51.793734   22294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0612 20:12:51.793745   22294 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 20:12:51.896378   22294 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718223171.863565953
	
	I0612 20:12:51.896403   22294 fix.go:216] guest clock: 1718223171.863565953
	I0612 20:12:51.896416   22294 fix.go:229] Guest: 2024-06-12 20:12:51.863565953 +0000 UTC Remote: 2024-06-12 20:12:51.790553747 +0000 UTC m=+24.159727019 (delta=73.012206ms)
	I0612 20:12:51.896443   22294 fix.go:200] guest clock delta is within tolerance: 73.012206ms
	I0612 20:12:51.896450   22294 start.go:83] releasing machines lock for "addons-899843", held for 24.163267679s
	I0612 20:12:51.896476   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:51.896733   22294 main.go:141] libmachine: (addons-899843) Calling .GetIP
	I0612 20:12:51.899507   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.899923   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.899954   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.900123   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:51.900722   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:51.900910   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:51.901063   22294 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 20:12:51.901132   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:51.901155   22294 ssh_runner.go:195] Run: cat /version.json
	I0612 20:12:51.901181   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:51.903467   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.903820   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.903852   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.903881   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.903948   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:51.904116   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.904261   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.904275   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:51.904282   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.904407   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:51.904600   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.904594   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:12:51.904759   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:51.904900   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:12:51.980538   22294 ssh_runner.go:195] Run: systemctl --version
	I0612 20:12:52.005563   22294 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 20:12:52.166193   22294 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 20:12:52.175084   22294 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 20:12:52.175159   22294 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 20:12:52.193412   22294 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 20:12:52.193433   22294 start.go:494] detecting cgroup driver to use...
	I0612 20:12:52.193496   22294 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 20:12:52.211891   22294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 20:12:52.226647   22294 docker.go:217] disabling cri-docker service (if available) ...
	I0612 20:12:52.226711   22294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 20:12:52.240593   22294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 20:12:52.254096   22294 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 20:12:52.367130   22294 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 20:12:52.508630   22294 docker.go:233] disabling docker service ...
	I0612 20:12:52.508702   22294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 20:12:52.523339   22294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 20:12:52.536917   22294 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 20:12:52.680583   22294 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 20:12:52.799487   22294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 20:12:52.813911   22294 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 20:12:52.833795   22294 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 20:12:52.833864   22294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:12:52.844958   22294 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 20:12:52.845039   22294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:12:52.856226   22294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:12:52.867258   22294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:12:52.878052   22294 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 20:12:52.889287   22294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:12:52.899601   22294 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:12:52.916357   22294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:12:52.926649   22294 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 20:12:52.935722   22294 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 20:12:52.935786   22294 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 20:12:52.956751   22294 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 20:12:52.967943   22294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:12:53.084220   22294 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 20:12:53.215347   22294 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 20:12:53.215425   22294 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 20:12:53.219998   22294 start.go:562] Will wait 60s for crictl version
	I0612 20:12:53.220048   22294 ssh_runner.go:195] Run: which crictl
	I0612 20:12:53.223692   22294 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 20:12:53.264809   22294 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 20:12:53.264914   22294 ssh_runner.go:195] Run: crio --version
	I0612 20:12:53.296197   22294 ssh_runner.go:195] Run: crio --version
	I0612 20:12:53.328882   22294 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 20:12:53.330420   22294 main.go:141] libmachine: (addons-899843) Calling .GetIP
	I0612 20:12:53.332932   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:53.333195   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:53.333215   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:53.333485   22294 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 20:12:53.337798   22294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 20:12:53.352215   22294 kubeadm.go:877] updating cluster {Name:addons-899843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:addons-899843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 20:12:53.352303   22294 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:12:53.352351   22294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 20:12:53.383895   22294 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 20:12:53.383947   22294 ssh_runner.go:195] Run: which lz4
	I0612 20:12:53.387860   22294 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 20:12:53.392318   22294 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 20:12:53.392352   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 20:12:54.738861   22294 crio.go:462] duration metric: took 1.351034202s to copy over tarball
	I0612 20:12:54.738937   22294 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 20:12:56.990348   22294 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.251379217s)
	I0612 20:12:56.990379   22294 crio.go:469] duration metric: took 2.251483754s to extract the tarball
	I0612 20:12:56.990387   22294 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 20:12:57.028082   22294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 20:12:57.071111   22294 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 20:12:57.071133   22294 cache_images.go:84] Images are preloaded, skipping loading
	I0612 20:12:57.071140   22294 kubeadm.go:928] updating node { 192.168.39.248 8443 v1.30.1 crio true true} ...
	I0612 20:12:57.071263   22294 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-899843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-899843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 20:12:57.071346   22294 ssh_runner.go:195] Run: crio config
	I0612 20:12:57.118176   22294 cni.go:84] Creating CNI manager for ""
	I0612 20:12:57.118198   22294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 20:12:57.118206   22294 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 20:12:57.118230   22294 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.248 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-899843 NodeName:addons-899843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 20:12:57.118381   22294 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-899843"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.248"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 20:12:57.118497   22294 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 20:12:57.129959   22294 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 20:12:57.130023   22294 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 20:12:57.140846   22294 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0612 20:12:57.157983   22294 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 20:12:57.174539   22294 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0612 20:12:57.191073   22294 ssh_runner.go:195] Run: grep 192.168.39.248	control-plane.minikube.internal$ /etc/hosts
	I0612 20:12:57.194915   22294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 20:12:57.208228   22294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:12:57.343213   22294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 20:12:57.362130   22294 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843 for IP: 192.168.39.248
	I0612 20:12:57.362166   22294 certs.go:194] generating shared ca certs ...
	I0612 20:12:57.362192   22294 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:57.362366   22294 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 20:12:57.669661   22294 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt ...
	I0612 20:12:57.669688   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt: {Name:mkd1af81bf97f1c0885dd57c35a317726bd3e69a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:57.669855   22294 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key ...
	I0612 20:12:57.669869   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key: {Name:mka2d81b38abf69ca1705fcee8bcf4cdf7c55924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:57.669979   22294 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 20:12:57.785832   22294 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt ...
	I0612 20:12:57.785859   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt: {Name:mk6e97d71149d268999fee6d2feb14575dee2d03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:57.786051   22294 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key ...
	I0612 20:12:57.786065   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key: {Name:mkcd801bf7fc5f2fae41be0bda174154814a3e88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:57.786164   22294 certs.go:256] generating profile certs ...
	I0612 20:12:57.786220   22294 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.key
	I0612 20:12:57.786233   22294 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt with IP's: []
	I0612 20:12:57.997556   22294 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt ...
	I0612 20:12:57.997585   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: {Name:mk4ba79d69ef12d7b904cd7b47ee6e16bfd1f7cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:57.997769   22294 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.key ...
	I0612 20:12:57.997783   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.key: {Name:mk2e6da6374b0c00451e88928371fd21bdb19d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:57.997876   22294 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.key.e101c7ef
	I0612 20:12:57.997896   22294 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.crt.e101c7ef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.248]
	I0612 20:12:58.060260   22294 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.crt.e101c7ef ...
	I0612 20:12:58.060286   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.crt.e101c7ef: {Name:mk1b60ef7b26a48410dbad630333449f4eecbb22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:58.060451   22294 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.key.e101c7ef ...
	I0612 20:12:58.060465   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.key.e101c7ef: {Name:mkaa467e4110b4f3f44dea8e097d45db83fece80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:58.060555   22294 certs.go:381] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.crt.e101c7ef -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.crt
	I0612 20:12:58.060626   22294 certs.go:385] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.key.e101c7ef -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.key
	I0612 20:12:58.060674   22294 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.key
	I0612 20:12:58.060690   22294 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.crt with IP's: []
	I0612 20:12:58.163482   22294 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.crt ...
	I0612 20:12:58.163509   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.crt: {Name:mk55f789d5f0a08841bd1cf3c48a5bbb02e1b769 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:58.163685   22294 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.key ...
	I0612 20:12:58.163698   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.key: {Name:mkd8c4c104647278f13822fac0e7b4f1aec25fd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:58.163890   22294 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 20:12:58.163928   22294 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 20:12:58.163951   22294 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 20:12:58.163974   22294 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 20:12:58.164538   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 20:12:58.205900   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 20:12:58.254025   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 20:12:58.278333   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 20:12:58.302618   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0612 20:12:58.326804   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 20:12:58.349935   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 20:12:58.374082   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0612 20:12:58.397196   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 20:12:58.420771   22294 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 20:12:58.437594   22294 ssh_runner.go:195] Run: openssl version
	I0612 20:12:58.443733   22294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 20:12:58.455367   22294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:12:58.460309   22294 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:12:58.460367   22294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:12:58.466786   22294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 20:12:58.478651   22294 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 20:12:58.483154   22294 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 20:12:58.483223   22294 kubeadm.go:391] StartCluster: {Name:addons-899843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 C
lusterName:addons-899843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:12:58.483295   22294 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 20:12:58.483334   22294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 20:12:58.527011   22294 cri.go:89] found id: ""
	I0612 20:12:58.527087   22294 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0612 20:12:58.538058   22294 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 20:12:58.548594   22294 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 20:12:58.559198   22294 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 20:12:58.559222   22294 kubeadm.go:156] found existing configuration files:
	
	I0612 20:12:58.559272   22294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 20:12:58.569359   22294 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 20:12:58.569413   22294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 20:12:58.579573   22294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 20:12:58.589537   22294 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 20:12:58.589591   22294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 20:12:58.599451   22294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 20:12:58.608577   22294 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 20:12:58.608627   22294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 20:12:58.618359   22294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 20:12:58.627643   22294 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 20:12:58.627686   22294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 20:12:58.637024   22294 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 20:12:58.706936   22294 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 20:12:58.707002   22294 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 20:12:58.826862   22294 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 20:12:58.827025   22294 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 20:12:58.827183   22294 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 20:12:59.058094   22294 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 20:12:59.275457   22294 out.go:204]   - Generating certificates and keys ...
	I0612 20:12:59.275565   22294 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 20:12:59.275662   22294 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 20:12:59.329967   22294 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0612 20:12:59.555367   22294 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0612 20:12:59.797115   22294 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0612 20:12:59.866831   22294 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0612 20:13:00.090142   22294 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0612 20:13:00.090271   22294 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-899843 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
	I0612 20:13:00.383818   22294 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0612 20:13:00.384041   22294 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-899843 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
	I0612 20:13:00.527310   22294 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0612 20:13:00.710065   22294 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0612 20:13:00.945971   22294 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0612 20:13:00.946093   22294 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 20:13:01.076251   22294 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 20:13:01.433324   22294 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 20:13:01.628583   22294 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 20:13:01.961599   22294 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 20:13:02.231153   22294 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 20:13:02.231662   22294 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 20:13:02.235487   22294 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 20:13:02.237388   22294 out.go:204]   - Booting up control plane ...
	I0612 20:13:02.237492   22294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 20:13:02.237604   22294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 20:13:02.237696   22294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 20:13:02.252487   22294 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 20:13:02.253468   22294 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 20:13:02.253549   22294 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 20:13:02.389148   22294 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 20:13:02.389289   22294 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 20:13:03.389831   22294 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001341582s
	I0612 20:13:03.389930   22294 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 20:13:08.388793   22294 kubeadm.go:309] [api-check] The API server is healthy after 5.001153602s
	I0612 20:13:08.406642   22294 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 20:13:08.422882   22294 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 20:13:08.457258   22294 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 20:13:08.457563   22294 kubeadm.go:309] [mark-control-plane] Marking the node addons-899843 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 20:13:08.471343   22294 kubeadm.go:309] [bootstrap-token] Using token: ix88o6.5ao8ybr6u6nckbj4
	I0612 20:13:08.472713   22294 out.go:204]   - Configuring RBAC rules ...
	I0612 20:13:08.472932   22294 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 20:13:08.477906   22294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 20:13:08.490328   22294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 20:13:08.493664   22294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 20:13:08.497300   22294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 20:13:08.501087   22294 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 20:13:08.799470   22294 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 20:13:09.233690   22294 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 20:13:09.807309   22294 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 20:13:09.808272   22294 kubeadm.go:309] 
	I0612 20:13:09.808340   22294 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 20:13:09.808351   22294 kubeadm.go:309] 
	I0612 20:13:09.808438   22294 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 20:13:09.808463   22294 kubeadm.go:309] 
	I0612 20:13:09.808521   22294 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 20:13:09.808604   22294 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 20:13:09.808689   22294 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 20:13:09.808699   22294 kubeadm.go:309] 
	I0612 20:13:09.808778   22294 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 20:13:09.808788   22294 kubeadm.go:309] 
	I0612 20:13:09.808853   22294 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 20:13:09.808862   22294 kubeadm.go:309] 
	I0612 20:13:09.808932   22294 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 20:13:09.809028   22294 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 20:13:09.809176   22294 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 20:13:09.809194   22294 kubeadm.go:309] 
	I0612 20:13:09.809301   22294 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 20:13:09.809401   22294 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 20:13:09.809419   22294 kubeadm.go:309] 
	I0612 20:13:09.809524   22294 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ix88o6.5ao8ybr6u6nckbj4 \
	I0612 20:13:09.809663   22294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 20:13:09.809704   22294 kubeadm.go:309] 	--control-plane 
	I0612 20:13:09.809714   22294 kubeadm.go:309] 
	I0612 20:13:09.809896   22294 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 20:13:09.809914   22294 kubeadm.go:309] 
	I0612 20:13:09.810013   22294 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ix88o6.5ao8ybr6u6nckbj4 \
	I0612 20:13:09.810144   22294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 20:13:09.810379   22294 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 20:13:09.810410   22294 cni.go:84] Creating CNI manager for ""
	I0612 20:13:09.810419   22294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 20:13:09.812440   22294 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 20:13:09.813908   22294 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 20:13:09.827476   22294 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
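The 1-k8s.conflist payload itself (496 bytes) is not echoed in the log. If needed, the bridge CNI config that was just written can be read back from the node; a hedged example using this run's profile name:

    # illustrative only: inspect the bridge CNI config installed above
    minikube -p addons-899843 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"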
	I0612 20:13:09.855803   22294 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 20:13:09.855927   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-899843 minikube.k8s.io/updated_at=2024_06_12T20_13_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=addons-899843 minikube.k8s.io/primary=true
	I0612 20:13:09.855931   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:09.983410   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:10.017604   22294 ops.go:34] apiserver oom_adj: -16
	I0612 20:13:10.483963   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:10.984405   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:11.484414   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:11.983809   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:12.483844   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:12.983944   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:13.484138   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:13.984279   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:14.483512   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:14.983514   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:15.483549   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:15.984310   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:16.484133   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:16.984066   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:17.483582   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:17.983873   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:18.483584   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:18.983588   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:19.484002   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:19.984354   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:20.484367   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:20.983585   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:21.483828   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:21.983793   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:22.067919   22294 kubeadm.go:1107] duration metric: took 12.212071248s to wait for elevateKubeSystemPrivileges
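The block of repeated "kubectl get sa default" calls above is minikube polling until the default ServiceAccount exists; that wait is what the 12.212071248s elevateKubeSystemPrivileges metric covers. A minimal shell sketch of the same loop (the until/sleep wrapper is an assumption; the kubectl command is copied verbatim from the log):

    # sketch of the wait loop: retry until the default ServiceAccount is created
    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig; do
      sleep 0.5
    done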
	W0612 20:13:22.067957   22294 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 20:13:22.067967   22294 kubeadm.go:393] duration metric: took 23.584748426s to StartCluster
	I0612 20:13:22.067988   22294 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:13:22.068115   22294 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:13:22.068462   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:13:22.068645   22294 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0612 20:13:22.068662   22294 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:13:22.070636   22294 out.go:177] * Verifying Kubernetes components...
	I0612 20:13:22.068721   22294 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
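The toEnable map above is the addon set minikube resolved for this profile. For reference, the same addons can be toggled individually per profile from the CLI (illustrative commands only; the addon names come from the map above):

    # illustrative only: per-profile addon management
    minikube -p addons-899843 addons enable metrics-server
    minikube -p addons-899843 addons disable volcano
    minikube -p addons-899843 addons list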
	I0612 20:13:22.068846   22294 config.go:182] Loaded profile config "addons-899843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:13:22.071939   22294 addons.go:69] Setting yakd=true in profile "addons-899843"
	I0612 20:13:22.071945   22294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:13:22.071955   22294 addons.go:69] Setting cloud-spanner=true in profile "addons-899843"
	I0612 20:13:22.071972   22294 addons.go:234] Setting addon yakd=true in "addons-899843"
	I0612 20:13:22.071977   22294 addons.go:69] Setting gcp-auth=true in profile "addons-899843"
	I0612 20:13:22.072022   22294 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-899843"
	I0612 20:13:22.072051   22294 mustload.go:65] Loading cluster: addons-899843
	I0612 20:13:22.072084   22294 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-899843"
	I0612 20:13:22.072118   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.071969   22294 addons.go:69] Setting registry=true in profile "addons-899843"
	I0612 20:13:22.072171   22294 addons.go:234] Setting addon registry=true in "addons-899843"
	I0612 20:13:22.072197   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.071983   22294 addons.go:234] Setting addon cloud-spanner=true in "addons-899843"
	I0612 20:13:22.072244   22294 config.go:182] Loaded profile config "addons-899843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:13:22.072267   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.071990   22294 addons.go:69] Setting helm-tiller=true in profile "addons-899843"
	I0612 20:13:22.072354   22294 addons.go:234] Setting addon helm-tiller=true in "addons-899843"
	I0612 20:13:22.072388   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.071990   22294 addons.go:69] Setting inspektor-gadget=true in profile "addons-899843"
	I0612 20:13:22.072443   22294 addons.go:234] Setting addon inspektor-gadget=true in "addons-899843"
	I0612 20:13:22.072471   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.071996   22294 addons.go:69] Setting metrics-server=true in profile "addons-899843"
	I0612 20:13:22.072530   22294 addons.go:234] Setting addon metrics-server=true in "addons-899843"
	I0612 20:13:22.072561   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.072568   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.072576   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.072583   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.071946   22294 addons.go:69] Setting ingress-dns=true in profile "addons-899843"
	I0612 20:13:22.072598   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072600   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072614   22294 addons.go:234] Setting addon ingress-dns=true in "addons-899843"
	I0612 20:13:22.072639   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.072645   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.072665   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072771   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.071995   22294 addons.go:69] Setting ingress=true in profile "addons-899843"
	I0612 20:13:22.072801   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072800   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.072813   22294 addons.go:234] Setting addon ingress=true in "addons-899843"
	I0612 20:13:22.072821   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072833   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072000   22294 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-899843"
	I0612 20:13:22.072858   22294 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-899843"
	I0612 20:13:22.072002   22294 addons.go:69] Setting default-storageclass=true in profile "addons-899843"
	I0612 20:13:22.072007   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.072888   22294 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-899843"
	I0612 20:13:22.072005   22294 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-899843"
	I0612 20:13:22.072913   22294 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-899843"
	I0612 20:13:22.072009   22294 addons.go:69] Setting storage-provisioner=true in profile "addons-899843"
	I0612 20:13:22.072934   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.072953   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072951   22294 addons.go:234] Setting addon storage-provisioner=true in "addons-899843"
	I0612 20:13:22.072012   22294 addons.go:69] Setting volumesnapshots=true in profile "addons-899843"
	I0612 20:13:22.072974   22294 addons.go:234] Setting addon volumesnapshots=true in "addons-899843"
	I0612 20:13:22.073106   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.073213   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073231   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.073231   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073249   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073261   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.073277   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072006   22294 addons.go:69] Setting volcano=true in profile "addons-899843"
	I0612 20:13:22.073308   22294 addons.go:234] Setting addon volcano=true in "addons-899843"
	I0612 20:13:22.073377   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.073439   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073467   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.073513   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.073520   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073544   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.073550   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.073699   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073725   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.073838   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073854   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073870   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.073878   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.074079   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.074445   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.074481   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.092809   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43405
	I0612 20:13:22.093139   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42307
	I0612 20:13:22.093237   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35683
	I0612 20:13:22.093493   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.093899   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.093901   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44757
	I0612 20:13:22.094008   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.094092   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.094424   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33821
	I0612 20:13:22.094487   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.094502   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.094502   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.094586   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.094859   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.095041   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.095060   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.095094   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.095224   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.095443   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.095492   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.095514   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.095625   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.095639   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.095813   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.095969   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.103632   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.103642   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.103678   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.103682   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.103638   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.103750   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.103942   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.103972   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.104024   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.104052   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.111630   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42727
	I0612 20:13:22.112271   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.112818   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.112842   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.113167   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.113741   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.113772   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.139974   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36783
	I0612 20:13:22.140197   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34471
	I0612 20:13:22.140319   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I0612 20:13:22.140600   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.140839   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.140944   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.141073   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.141097   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.141607   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.141623   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.141751   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.141763   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.141826   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.142365   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.142402   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.142605   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.142613   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.142802   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.142935   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.144892   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.145096   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41319
	I0612 20:13:22.145211   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I0612 20:13:22.147424   22294 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0612 20:13:22.145825   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.145876   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.146213   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.146997   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42811
	I0612 20:13:22.148921   22294 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0612 20:13:22.148938   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0612 20:13:22.148957   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.149953   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.149970   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.149977   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.150034   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0612 20:13:22.151891   22294 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0612 20:13:22.150213   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.150800   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.150833   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.151686   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.153298   22294 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0612 20:13:22.153309   22294 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0612 20:13:22.153328   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.153359   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.153397   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.154682   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.154688   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43173
	I0612 20:13:22.154715   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.154689   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.154742   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.154791   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.154806   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.155403   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.155469   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.155553   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.155774   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.155816   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.156620   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.157216   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.157234   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.157527   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.157541   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.157884   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.157927   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.158085   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.158129   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.158566   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.158785   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.159298   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.159380   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.161578   22294 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0612 20:13:22.161210   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.162068   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.163048   22294 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0612 20:13:22.164658   22294 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0612 20:13:22.164689   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0612 20:13:22.164705   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.162783   22294 addons.go:234] Setting addon default-storageclass=true in "addons-899843"
	I0612 20:13:22.164778   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.165139   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.165173   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.162787   22294 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-899843"
	I0612 20:13:22.166507   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.166864   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.166899   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.168649   22294 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0612 20:13:22.163106   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.163265   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.163931   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39011
	I0612 20:13:22.170087   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.170128   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.171602   22294 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0612 20:13:22.170483   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.170725   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.171455   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.171494   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.173191   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.173310   22294 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0612 20:13:22.173327   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0612 20:13:22.173351   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.173356   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.174613   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.174632   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.174694   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36473
	I0612 20:13:22.174828   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.175109   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.175163   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.175357   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.175600   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.176145   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.176161   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.176616   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.176638   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.177105   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.177324   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.178558   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.178928   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.178945   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.179132   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.179330   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.179498   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.179555   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.179737   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.181965   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0612 20:13:22.181007   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43275
	I0612 20:13:22.181199   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36741
	I0612 20:13:22.181417   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38873
	I0612 20:13:22.182409   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37051
	I0612 20:13:22.188176   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0612 20:13:22.183767   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.183913   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.184271   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.187211   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46319
	I0612 20:13:22.187216   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39637
	I0612 20:13:22.187221   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0612 20:13:22.187640   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.187899   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0612 20:13:22.188943   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0612 20:13:22.191885   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0612 20:13:22.190083   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.190456   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.190522   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.190863   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.190892   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.190964   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.191071   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.191237   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.191350   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.193240   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.193313   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.194736   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0612 20:13:22.193370   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.193413   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.193741   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.193780   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.193950   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.194033   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.194119   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.194199   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.194515   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.196094   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.197590   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0612 20:13:22.196312   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.196325   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.196340   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.196379   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.196773   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.196791   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.196792   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.196839   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.197086   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.200360   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0612 20:13:22.198925   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.199281   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.199291   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.199345   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.199366   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.199659   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.200211   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.200458   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.200475   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.201267   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0612 20:13:22.203433   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0612 20:13:22.201721   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.201838   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.202118   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.202138   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.202340   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.202424   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.203307   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.203702   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.204441   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.207233   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0612 20:13:22.205503   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.205813   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40247
	I0612 20:13:22.205841   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.205854   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.205899   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.206381   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.207404   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.208506   22294 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0612 20:13:22.209729   22294 out.go:177]   - Using image docker.io/registry:2.8.3
	I0612 20:13:22.211156   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.211191   22294 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 20:13:22.211255   22294 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0612 20:13:22.211479   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:22.212579   22294 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0612 20:13:22.212590   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:22.215611   22294 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0612 20:13:22.215627   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0612 20:13:22.215643   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.217299   22294 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 20:13:22.217314   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 20:13:22.217329   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.212677   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.213029   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:22.213058   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:22.217410   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:22.217419   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:22.217427   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:22.214573   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.217705   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:22.217733   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:22.217741   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	W0612 20:13:22.217816   22294 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0612 20:13:22.219754   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.220884   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.221362   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.221390   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.221545   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.221594   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.221803   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.222068   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.222162   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.222358   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.223981   22294 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0612 20:13:22.222622   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.222798   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.222955   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.222995   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.223222   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.225588   22294 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 20:13:22.225600   22294 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 20:13:22.225619   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.225929   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.225951   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.226227   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.226281   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.226553   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.226602   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.226754   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.226835   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.229727   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36193
	I0612 20:13:22.230178   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.230696   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.230712   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.230770   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44791
	I0612 20:13:22.230916   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44475
	I0612 20:13:22.231262   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.231353   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.231399   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.231961   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45147
	I0612 20:13:22.231975   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.232002   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.231964   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.232058   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.232058   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.232061   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.232082   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.232456   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.232491   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.232502   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.232555   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.232662   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.232800   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.232854   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.233039   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.241426   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I0612 20:13:22.241575   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.241681   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.241742   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.241782   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.241951   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I0612 20:13:22.242116   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.242120   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.242242   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.242254   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.242565   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.242895   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.243153   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.243212   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.243550   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.243565   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.243681   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.243916   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.244324   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.244392   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.247156   22294 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0612 20:13:22.245585   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.246549   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.246778   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.246920   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.248668   22294 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0612 20:13:22.248682   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.248684   22294 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0612 20:13:22.248757   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.250280   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0612 20:13:22.251808   22294 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0612 20:13:22.251826   22294 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0612 20:13:22.251847   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.250476   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.253569   22294 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0612 20:13:22.250500   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.252517   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.252557   22294 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 20:13:22.252910   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.254645   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.255267   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.255287   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.255319   22294 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0612 20:13:22.255331   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0612 20:13:22.255343   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.255345   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.255361   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.255368   22294 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 20:13:22.255379   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.255217   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.255963   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.256042   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.256084   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.256127   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.256194   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.256235   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.256636   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.258394   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.260383   22294 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0612 20:13:22.259234   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.259824   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.259835   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.260301   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.262157   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.262180   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.262158   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.262199   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.262274   22294 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0612 20:13:22.262285   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0612 20:13:22.262303   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.262419   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.262502   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.262558   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.262666   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.262921   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.263057   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.265195   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.265612   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.265634   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.265718   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.265890   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.266026   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.266182   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	W0612 20:13:22.266610   22294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57956->192.168.39.248:22: read: connection reset by peer
	I0612 20:13:22.266635   22294 retry.go:31] will retry after 266.514003ms: ssh: handshake failed: read tcp 192.168.39.1:57956->192.168.39.248:22: read: connection reset by peer
	W0612 20:13:22.267105   22294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57972->192.168.39.248:22: read: connection reset by peer
	I0612 20:13:22.267120   22294 retry.go:31] will retry after 197.996218ms: ssh: handshake failed: read tcp 192.168.39.1:57972->192.168.39.248:22: read: connection reset by peer
	I0612 20:13:22.273839   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45725
	I0612 20:13:22.274192   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.274690   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.274710   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.275048   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.275277   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.276907   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.278886   22294 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0612 20:13:22.280507   22294 out.go:177]   - Using image docker.io/busybox:stable
	I0612 20:13:22.282013   22294 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0612 20:13:22.282035   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0612 20:13:22.282056   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.284749   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.285141   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.285166   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.285299   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.285478   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.285614   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.285792   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	W0612 20:13:22.287866   22294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57980->192.168.39.248:22: read: connection reset by peer
	I0612 20:13:22.287887   22294 retry.go:31] will retry after 212.825352ms: ssh: handshake failed: read tcp 192.168.39.1:57980->192.168.39.248:22: read: connection reset by peer
	I0612 20:13:22.504444   22294 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0612 20:13:22.504467   22294 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0612 20:13:22.580185   22294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 20:13:22.580223   22294 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0612 20:13:22.599270   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0612 20:13:22.642311   22294 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0612 20:13:22.642340   22294 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0612 20:13:22.668236   22294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 20:13:22.668267   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0612 20:13:22.718448   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0612 20:13:22.735200   22294 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0612 20:13:22.735225   22294 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0612 20:13:22.834509   22294 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0612 20:13:22.834537   22294 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0612 20:13:22.866635   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 20:13:22.871072   22294 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0612 20:13:22.871098   22294 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0612 20:13:22.911158   22294 node_ready.go:35] waiting up to 6m0s for node "addons-899843" to be "Ready" ...
	I0612 20:13:22.914514   22294 node_ready.go:49] node "addons-899843" has status "Ready":"True"
	I0612 20:13:22.914537   22294 node_ready.go:38] duration metric: took 3.330668ms for node "addons-899843" to be "Ready" ...
	I0612 20:13:22.914546   22294 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 20:13:22.921162   22294 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vcczk" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:22.950633   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0612 20:13:22.951587   22294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 20:13:22.951612   22294 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 20:13:22.955366   22294 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0612 20:13:22.955388   22294 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0612 20:13:22.999792   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0612 20:13:23.002883   22294 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0612 20:13:23.002902   22294 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0612 20:13:23.009468   22294 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0612 20:13:23.009492   22294 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0612 20:13:23.099146   22294 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0612 20:13:23.099179   22294 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0612 20:13:23.143821   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0612 20:13:23.149958   22294 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0612 20:13:23.149981   22294 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0612 20:13:23.230052   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 20:13:23.293946   22294 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0612 20:13:23.293976   22294 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0612 20:13:23.297354   22294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 20:13:23.297379   22294 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 20:13:23.321358   22294 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0612 20:13:23.321379   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0612 20:13:23.328583   22294 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0612 20:13:23.328597   22294 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0612 20:13:23.334853   22294 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0612 20:13:23.334877   22294 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0612 20:13:23.354171   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0612 20:13:23.464782   22294 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0612 20:13:23.464805   22294 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0612 20:13:23.569896   22294 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0612 20:13:23.569921   22294 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0612 20:13:23.584845   22294 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0612 20:13:23.584873   22294 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0612 20:13:23.598618   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0612 20:13:23.603990   22294 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0612 20:13:23.604021   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0612 20:13:23.609170   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 20:13:23.750175   22294 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0612 20:13:23.750215   22294 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0612 20:13:23.755982   22294 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0612 20:13:23.756012   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0612 20:13:23.777292   22294 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0612 20:13:23.777319   22294 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0612 20:13:23.780935   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0612 20:13:23.923604   22294 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0612 20:13:23.923631   22294 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0612 20:13:24.107403   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0612 20:13:24.175358   22294 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0612 20:13:24.175378   22294 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0612 20:13:24.316376   22294 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0612 20:13:24.316398   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0612 20:13:24.493895   22294 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0612 20:13:24.493919   22294 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0612 20:13:24.627481   22294 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0612 20:13:24.627508   22294 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0612 20:13:24.861133   22294 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0612 20:13:24.861158   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0612 20:13:24.927586   22294 pod_ready.go:102] pod "coredns-7db6d8ff4d-vcczk" in "kube-system" namespace has status "Ready":"False"
	I0612 20:13:25.039325   22294 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0612 20:13:25.039356   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0612 20:13:25.200943   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0612 20:13:25.221045   22294 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.640784377s)
	I0612 20:13:25.221088   22294 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0612 20:13:25.679128   22294 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0612 20:13:25.679154   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0612 20:13:25.728061   22294 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-899843" context rescaled to 1 replicas
	I0612 20:13:26.026057   22294 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0612 20:13:26.026086   22294 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0612 20:13:26.484352   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0612 20:13:27.013000   22294 pod_ready.go:102] pod "coredns-7db6d8ff4d-vcczk" in "kube-system" namespace has status "Ready":"False"
	I0612 20:13:27.973811   22294 pod_ready.go:92] pod "coredns-7db6d8ff4d-vcczk" in "kube-system" namespace has status "Ready":"True"
	I0612 20:13:27.973848   22294 pod_ready.go:81] duration metric: took 5.052657039s for pod "coredns-7db6d8ff4d-vcczk" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:27.973862   22294 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-whsws" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.098852   22294 pod_ready.go:92] pod "coredns-7db6d8ff4d-whsws" in "kube-system" namespace has status "Ready":"True"
	I0612 20:13:28.098877   22294 pod_ready.go:81] duration metric: took 125.007716ms for pod "coredns-7db6d8ff4d-whsws" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.098888   22294 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.215226   22294 pod_ready.go:92] pod "etcd-addons-899843" in "kube-system" namespace has status "Ready":"True"
	I0612 20:13:28.215263   22294 pod_ready.go:81] duration metric: took 116.367988ms for pod "etcd-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.215277   22294 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.281296   22294 pod_ready.go:92] pod "kube-apiserver-addons-899843" in "kube-system" namespace has status "Ready":"True"
	I0612 20:13:28.281327   22294 pod_ready.go:81] duration metric: took 66.04148ms for pod "kube-apiserver-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.281340   22294 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.322865   22294 pod_ready.go:92] pod "kube-controller-manager-addons-899843" in "kube-system" namespace has status "Ready":"True"
	I0612 20:13:28.322899   22294 pod_ready.go:81] duration metric: took 41.550583ms for pod "kube-controller-manager-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.322913   22294 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rbbmx" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.361318   22294 pod_ready.go:92] pod "kube-proxy-rbbmx" in "kube-system" namespace has status "Ready":"True"
	I0612 20:13:28.361341   22294 pod_ready.go:81] duration metric: took 38.421415ms for pod "kube-proxy-rbbmx" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.361350   22294 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.743251   22294 pod_ready.go:92] pod "kube-scheduler-addons-899843" in "kube-system" namespace has status "Ready":"True"
	I0612 20:13:28.743287   22294 pod_ready.go:81] duration metric: took 381.916619ms for pod "kube-scheduler-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.743298   22294 pod_ready.go:38] duration metric: took 5.828741017s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 20:13:28.743315   22294 api_server.go:52] waiting for apiserver process to appear ...
	I0612 20:13:28.743371   22294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:13:29.338832   22294 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0612 20:13:29.338872   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:29.342245   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:29.342737   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:29.342769   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:29.342979   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:29.343225   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:29.343395   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:29.343527   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:30.139855   22294 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0612 20:13:30.291694   22294 addons.go:234] Setting addon gcp-auth=true in "addons-899843"
	I0612 20:13:30.291755   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:30.292188   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:30.292231   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:30.308275   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I0612 20:13:30.308808   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:30.309324   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:30.309350   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:30.309659   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:30.310249   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:30.310301   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:30.327660   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0612 20:13:30.328099   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:30.328601   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:30.328616   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:30.328980   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:30.329219   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:30.331231   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:30.331456   22294 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0612 20:13:30.331490   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:30.334646   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:30.335276   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:30.335305   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:30.335526   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:30.335729   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:30.335905   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:30.336053   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:31.426367   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.827060925s)
	I0612 20:13:31.426419   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426422   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.707943214s)
	I0612 20:13:31.426461   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426472   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426478   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.559811913s)
	I0612 20:13:31.426496   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426505   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426431   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426566   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.475900289s)
	I0612 20:13:31.426594   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426605   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426630   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.42680937s)
	I0612 20:13:31.426672   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426673   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.282825584s)
	I0612 20:13:31.426682   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426705   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426717   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426783   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.196708296s)
	I0612 20:13:31.426801   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426808   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426891   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.072683305s)
	I0612 20:13:31.426928   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.426910   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426938   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.426943   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426969   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.828321802s)
	I0612 20:13:31.427027   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.427031   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.427036   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.427041   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.427044   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.427050   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.427095   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.427102   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.427109   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.427116   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.427191   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.817984227s)
	I0612 20:13:31.427220   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.427235   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426986   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.428605   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.428616   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.428626   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.428638   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.428789   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.428814   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.428821   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.428829   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.428836   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.428922   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.428984   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.429004   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.429010   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.427008   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.429244   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.429254   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.429261   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.429344   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.648365965s)
	I0612 20:13:31.429361   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.429367   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.430024   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.32258049s)
	W0612 20:13:31.430062   22294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0612 20:13:31.430093   22294 retry.go:31] will retry after 153.385095ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0612 20:13:31.430193   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.2292178s)
	I0612 20:13:31.430211   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.430220   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.430284   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.430306   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.430312   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.430322   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.430328   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.430369   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.430386   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.430392   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.430401   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.430444   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.430466   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.430472   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.430529   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.430547   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.430553   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.430560   22294 addons.go:475] Verifying addon ingress=true in "addons-899843"
	I0612 20:13:31.432619   22294 out.go:177] * Verifying ingress addon...
	I0612 20:13:31.430792   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.430813   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.430954   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.430978   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.430987   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.430999   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.431003   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.431017   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.431034   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.431037   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.431055   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.431061   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.431075   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.432090   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.432696   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.432698   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.432733   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.434204   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.435485   22294 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-899843 service yakd-dashboard -n yakd-dashboard
	
	I0612 20:13:31.432746   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.432753   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.432760   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.432127   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.432764   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.432770   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.434208   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.434217   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.434892   22294 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0612 20:13:31.436874   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.436893   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.436909   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.436926   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.436959   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.436895   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.436972   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.436975   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.436980   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.437398   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.437414   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.437422   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.437429   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.437439   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.437444   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.437445   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.437452   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.437430   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.437587   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.437643   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.437652   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.437660   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.437668   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.437669   22294 addons.go:475] Verifying addon registry=true in "addons-899843"
	I0612 20:13:31.439100   22294 out.go:177] * Verifying registry addon...
	I0612 20:13:31.437809   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.440877   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.440894   22294 addons.go:475] Verifying addon metrics-server=true in "addons-899843"
	I0612 20:13:31.441648   22294 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0612 20:13:31.461798   22294 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0612 20:13:31.461820   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:31.462149   22294 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0612 20:13:31.462171   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
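The kapi.go lines that follow are a poll loop: each labelled pod is re-checked until it leaves Pending and reports Ready, which is why the same "waiting for pod" message repeats until the addon pods come up. An equivalent one-shot check with stock kubectl, shown only as an illustration, using the label selectors and namespaces from the log:

	kubectl -n ingress-nginx wait --for=condition=Ready pod \
	    -l app.kubernetes.io/name=ingress-nginx --timeout=5m
	kubectl -n kube-system wait --for=condition=Ready pod \
	    -l kubernetes.io/minikube-addons=registry --timeout=5m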
	I0612 20:13:31.480792   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.480815   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.481144   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.481190   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.481198   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	W0612 20:13:31.481307   22294 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
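The warning above is a plain optimistic-concurrency conflict: the StorageClass was modified between minikube's read and its write, so the stale resourceVersion is rejected and the caller is expected to re-read and retry. The default-class toggle itself is only an annotation flip; a hedged sketch with stock kubectl, using the class names from the message (local-path, standard):

	kubectl patch storageclass local-path -p \
	    '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass standard -p \
	    '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'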
	I0612 20:13:31.484413   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.484432   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.484708   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.484720   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.484728   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.584187   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0612 20:13:31.942626   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:31.950818   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:32.442737   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:32.445433   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:32.941388   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:32.950965   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:33.479709   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:33.481193   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:33.623003   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.1385913s)
	I0612 20:13:33.623074   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:33.623080   22294 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.87968245s)
	I0612 20:13:33.623112   22294 api_server.go:72] duration metric: took 11.55442479s to wait for apiserver process to appear ...
	I0612 20:13:33.623123   22294 api_server.go:88] waiting for apiserver healthz status ...
	I0612 20:13:33.623147   22294 api_server.go:253] Checking apiserver healthz at https://192.168.39.248:8443/healthz ...
	I0612 20:13:33.623090   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:33.623112   22294 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.291635112s)
	I0612 20:13:33.624906   22294 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0612 20:13:33.623516   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:33.623592   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:33.626371   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:33.626393   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:33.626403   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:33.627779   22294 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0612 20:13:33.626634   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:33.626669   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:33.629060   22294 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0612 20:13:33.629068   22294 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0612 20:13:33.629100   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:33.629132   22294 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-899843"
	I0612 20:13:33.630648   22294 out.go:177] * Verifying csi-hostpath-driver addon...
	I0612 20:13:33.632948   22294 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0612 20:13:33.660297   22294 api_server.go:279] https://192.168.39.248:8443/healthz returned 200:
	ok
	I0612 20:13:33.671589   22294 api_server.go:141] control plane version: v1.30.1
	I0612 20:13:33.671613   22294 api_server.go:131] duration metric: took 48.483679ms to wait for apiserver health ...
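The healthz probe above is an HTTPS GET against the apiserver; in a default kubeadm-style setup the system:public-info-viewer binding exposes /healthz, /livez and /readyz without credentials. A quick manual check, assuming the same endpoint as in the log and skipping certificate verification:

	curl -k https://192.168.39.248:8443/healthz
	# expected output: ok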
	I0612 20:13:33.671621   22294 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 20:13:33.673170   22294 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0612 20:13:33.673194   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:33.719700   22294 system_pods.go:59] 19 kube-system pods found
	I0612 20:13:33.719745   22294 system_pods.go:61] "coredns-7db6d8ff4d-vcczk" [df3fef56-31ac-482e-a39b-29b00592b53b] Running
	I0612 20:13:33.719753   22294 system_pods.go:61] "coredns-7db6d8ff4d-whsws" [ad628dac-001d-4531-89fd-33629dcc54cb] Running
	I0612 20:13:33.719764   22294 system_pods.go:61] "csi-hostpath-attacher-0" [ba878465-f2b1-4c7e-a56e-791040338b12] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0612 20:13:33.719771   22294 system_pods.go:61] "csi-hostpath-resizer-0" [c1fba905-dff2-4f6b-8226-27d1530fe067] Pending
	I0612 20:13:33.719782   22294 system_pods.go:61] "csi-hostpathplugin-h9np6" [066343ae-5c77-4a5c-b973-ce1972c4816d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0612 20:13:33.719787   22294 system_pods.go:61] "etcd-addons-899843" [6762c9cc-df6c-48de-9bee-553b979bc90e] Running
	I0612 20:13:33.719793   22294 system_pods.go:61] "kube-apiserver-addons-899843" [1b709cc7-14d9-472a-9fd2-14f675696c51] Running
	I0612 20:13:33.719801   22294 system_pods.go:61] "kube-controller-manager-addons-899843" [77707797-5a1b-457f-9628-708c30b7209f] Running
	I0612 20:13:33.719809   22294 system_pods.go:61] "kube-ingress-dns-minikube" [fe4b4575-3547-4019-bc49-d7599aaaedc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0612 20:13:33.719819   22294 system_pods.go:61] "kube-proxy-rbbmx" [07785176-2ce1-4304-992e-8962b08939db] Running
	I0612 20:13:33.719825   22294 system_pods.go:61] "kube-scheduler-addons-899843" [2204b584-b2c5-4c49-924c-17b3552682a1] Running
	I0612 20:13:33.719833   22294 system_pods.go:61] "metrics-server-c59844bb4-g6s5d" [4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 20:13:33.719846   22294 system_pods.go:61] "nvidia-device-plugin-daemonset-7t2hk" [318904a0-3329-4548-9694-082dce3d63ff] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0612 20:13:33.719860   22294 system_pods.go:61] "registry-d4wfp" [4dedad66-548d-4156-a741-4077e86eb02b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0612 20:13:33.719874   22294 system_pods.go:61] "registry-proxy-l4fcl" [947cca02-a2df-4d5e-b84a-0cb7bb05d876] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0612 20:13:33.719886   22294 system_pods.go:61] "snapshot-controller-745499f584-2ctxc" [7350c859-7403-48dd-8f17-716af45a66e0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0612 20:13:33.719898   22294 system_pods.go:61] "snapshot-controller-745499f584-flslf" [143d8fc1-b352-4a2d-a199-4c29ea465493] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0612 20:13:33.719909   22294 system_pods.go:61] "storage-provisioner" [5aa128d9-0268-4ed7-9ba8-a3405add5dd5] Running
	I0612 20:13:33.719920   22294 system_pods.go:61] "tiller-deploy-6677d64bcd-wrb4j" [d5a32aea-e711-4681-8246-f238b7566914] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0612 20:13:33.719932   22294 system_pods.go:74] duration metric: took 48.304478ms to wait for pod list to return data ...
	I0612 20:13:33.719946   22294 default_sa.go:34] waiting for default service account to be created ...
	I0612 20:13:33.733254   22294 default_sa.go:45] found service account: "default"
	I0612 20:13:33.733279   22294 default_sa.go:55] duration metric: took 13.322298ms for default service account to be created ...
	I0612 20:13:33.733290   22294 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 20:13:33.763168   22294 system_pods.go:86] 19 kube-system pods found
	I0612 20:13:33.763216   22294 system_pods.go:89] "coredns-7db6d8ff4d-vcczk" [df3fef56-31ac-482e-a39b-29b00592b53b] Running
	I0612 20:13:33.763224   22294 system_pods.go:89] "coredns-7db6d8ff4d-whsws" [ad628dac-001d-4531-89fd-33629dcc54cb] Running
	I0612 20:13:33.763234   22294 system_pods.go:89] "csi-hostpath-attacher-0" [ba878465-f2b1-4c7e-a56e-791040338b12] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0612 20:13:33.763244   22294 system_pods.go:89] "csi-hostpath-resizer-0" [c1fba905-dff2-4f6b-8226-27d1530fe067] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0612 20:13:33.763258   22294 system_pods.go:89] "csi-hostpathplugin-h9np6" [066343ae-5c77-4a5c-b973-ce1972c4816d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0612 20:13:33.763269   22294 system_pods.go:89] "etcd-addons-899843" [6762c9cc-df6c-48de-9bee-553b979bc90e] Running
	I0612 20:13:33.763278   22294 system_pods.go:89] "kube-apiserver-addons-899843" [1b709cc7-14d9-472a-9fd2-14f675696c51] Running
	I0612 20:13:33.763289   22294 system_pods.go:89] "kube-controller-manager-addons-899843" [77707797-5a1b-457f-9628-708c30b7209f] Running
	I0612 20:13:33.763299   22294 system_pods.go:89] "kube-ingress-dns-minikube" [fe4b4575-3547-4019-bc49-d7599aaaedc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0612 20:13:33.763310   22294 system_pods.go:89] "kube-proxy-rbbmx" [07785176-2ce1-4304-992e-8962b08939db] Running
	I0612 20:13:33.763326   22294 system_pods.go:89] "kube-scheduler-addons-899843" [2204b584-b2c5-4c49-924c-17b3552682a1] Running
	I0612 20:13:33.763341   22294 system_pods.go:89] "metrics-server-c59844bb4-g6s5d" [4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 20:13:33.763354   22294 system_pods.go:89] "nvidia-device-plugin-daemonset-7t2hk" [318904a0-3329-4548-9694-082dce3d63ff] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0612 20:13:33.763371   22294 system_pods.go:89] "registry-d4wfp" [4dedad66-548d-4156-a741-4077e86eb02b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0612 20:13:33.763384   22294 system_pods.go:89] "registry-proxy-l4fcl" [947cca02-a2df-4d5e-b84a-0cb7bb05d876] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0612 20:13:33.763398   22294 system_pods.go:89] "snapshot-controller-745499f584-2ctxc" [7350c859-7403-48dd-8f17-716af45a66e0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0612 20:13:33.763413   22294 system_pods.go:89] "snapshot-controller-745499f584-flslf" [143d8fc1-b352-4a2d-a199-4c29ea465493] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0612 20:13:33.763424   22294 system_pods.go:89] "storage-provisioner" [5aa128d9-0268-4ed7-9ba8-a3405add5dd5] Running
	I0612 20:13:33.763434   22294 system_pods.go:89] "tiller-deploy-6677d64bcd-wrb4j" [d5a32aea-e711-4681-8246-f238b7566914] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0612 20:13:33.763449   22294 system_pods.go:126] duration metric: took 30.149887ms to wait for k8s-apps to be running ...
	I0612 20:13:33.763461   22294 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 20:13:33.763507   22294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:13:33.812467   22294 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0612 20:13:33.812497   22294 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0612 20:13:33.841382   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.257135351s)
	I0612 20:13:33.841444   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:33.841460   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:33.841811   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:33.841870   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:33.841885   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:33.841901   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:33.841914   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:33.842145   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:33.842179   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:33.842191   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:33.887868   22294 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0612 20:13:33.887899   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0612 20:13:33.915703   22294 system_svc.go:56] duration metric: took 152.2322ms WaitForService to wait for kubelet
	I0612 20:13:33.915733   22294 kubeadm.go:576] duration metric: took 11.847043277s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 20:13:33.915757   22294 node_conditions.go:102] verifying NodePressure condition ...
	I0612 20:13:33.919911   22294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 20:13:33.919937   22294 node_conditions.go:123] node cpu capacity is 2
	I0612 20:13:33.919952   22294 node_conditions.go:105] duration metric: took 4.189506ms to run NodePressure ...
	I0612 20:13:33.919967   22294 start.go:240] waiting for startup goroutines ...
	I0612 20:13:33.943407   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:33.947951   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:33.982710   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0612 20:13:34.139733   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:34.443454   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:34.450793   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:34.641665   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:34.945630   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:34.948863   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:35.139664   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:35.454824   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:35.456398   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:35.502512   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.519764711s)
	I0612 20:13:35.502578   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:35.502600   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:35.502924   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:35.502953   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:35.502965   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:35.502973   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:35.502991   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:35.503287   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:35.503341   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:35.505898   22294 addons.go:475] Verifying addon gcp-auth=true in "addons-899843"
	I0612 20:13:35.507700   22294 out.go:177] * Verifying gcp-auth addon...
	I0612 20:13:35.509931   22294 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0612 20:13:35.525388   22294 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0612 20:13:35.525422   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:35.641530   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:35.942177   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:35.946076   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:36.013803   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:36.138965   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:36.440838   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:36.446023   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:36.513866   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:36.638619   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:36.941753   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:36.946365   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:37.014220   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:37.138418   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:37.442873   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:37.447477   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:37.513223   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:37.637951   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:37.942361   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:37.946584   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:38.013244   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:38.138381   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:38.441866   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:38.445739   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:38.513621   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:38.638314   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:38.942584   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:38.946229   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:39.014090   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:39.138744   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:39.441243   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:39.446597   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:39.513632   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:39.640106   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:39.942861   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:39.949471   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:40.014821   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:40.139087   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:40.441225   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:40.446503   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:40.513918   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:40.638240   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:40.942437   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:40.945527   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:41.013013   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:41.142787   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:41.442217   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:41.447663   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:41.513968   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:41.639548   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:41.940600   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:41.945947   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:42.013608   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:42.139280   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:42.440944   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:42.445956   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:42.514292   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:42.638014   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:42.941229   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:42.946634   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:43.014081   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:43.139321   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:43.441941   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:43.446329   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:43.514349   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:43.639084   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:43.941170   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:43.946190   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:44.014724   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:44.139088   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:44.442577   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:44.447193   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:44.514675   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:44.641120   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:44.942873   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:44.946663   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:45.013963   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:45.138550   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:45.440758   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:45.445844   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:45.514534   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:45.638634   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:45.941212   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:45.946328   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:46.012685   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:46.139344   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:46.441587   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:46.445240   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:46.514056   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:46.638638   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:46.940912   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:46.945904   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:47.013910   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:47.138654   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:47.442908   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:47.447432   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:47.513849   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:47.640472   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:47.941419   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:47.946209   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:48.018789   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:48.138764   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:48.440771   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:48.446292   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:48.514151   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:48.639066   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:48.941345   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:48.946377   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:49.014053   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:49.140657   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:49.441933   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:49.445867   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:49.514073   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:49.639364   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:49.941848   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:49.947681   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:50.013603   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:50.242808   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:50.565817   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:50.565950   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:50.567151   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:50.649836   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:50.940922   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:50.946884   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:51.014087   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:51.138769   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:51.442652   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:51.445714   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:51.513919   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:51.640943   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:51.941909   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:51.945878   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:52.014488   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:52.138762   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:52.441823   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:52.446036   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:52.514518   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:52.638524   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:52.941873   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:52.946623   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:53.013095   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:53.142776   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:53.442395   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:53.446151   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:53.514756   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:53.638646   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:53.941531   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:53.947504   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:54.014024   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:54.139212   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:54.442459   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:54.446325   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:54.514055   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:54.639028   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:54.941841   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:54.946233   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:55.014189   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:55.138594   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:55.441414   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:55.445444   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:55.513740   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:55.638506   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:55.940824   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:55.946189   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:56.013977   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:56.139086   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:56.440896   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:56.445886   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:56.513793   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:56.639096   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:56.941136   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:56.949066   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:57.015635   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:57.139378   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:57.442176   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:57.446467   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:57.513827   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:57.638854   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:57.941679   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:57.945342   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:58.013397   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:58.138552   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:58.441522   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:58.445725   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:58.513987   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:58.638838   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:58.941278   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:58.946486   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:59.013426   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:59.138295   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:59.441550   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:59.445331   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:59.513318   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:59.638348   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:59.941585   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:59.946220   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:00.014210   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:00.138903   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:00.441113   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:00.446761   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:00.513876   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:00.640480   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:00.940504   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:00.945664   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:01.014145   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:01.139636   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:01.441018   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:01.446232   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:01.514364   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:01.639354   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:01.941847   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:01.945697   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:02.013427   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:02.138657   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:02.447121   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:02.452314   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:02.514095   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:02.639822   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:02.942774   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:02.951263   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:03.014003   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:03.139683   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:03.441826   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:03.446139   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:03.514693   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:03.638724   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:03.943020   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:03.946822   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:04.013828   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:04.139433   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:04.441086   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:04.447323   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:04.515602   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:04.641817   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:04.943687   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:04.954339   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:05.013191   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:05.137903   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:05.441578   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:05.445629   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:05.513531   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:05.638493   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:05.958599   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:05.958750   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:06.014059   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:06.139515   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:06.442392   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:06.449111   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:06.515214   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:06.639838   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:06.943109   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:06.946613   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:07.017502   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:07.138996   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:07.440574   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:07.445317   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:07.513449   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:07.639145   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:07.953378   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:07.953536   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:08.013811   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:08.138960   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:08.713266   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:08.713724   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:08.714290   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:08.714445   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:08.943516   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:08.948070   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:09.013504   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:09.139593   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:09.442688   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:09.446923   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:09.513610   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:09.638843   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:09.940651   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:09.945778   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:10.013911   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:10.139012   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:10.440803   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:10.446476   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:10.761684   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:10.769449   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:10.941936   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:10.945829   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:11.013821   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:11.138738   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:11.441256   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:11.447008   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:11.513871   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:11.638665   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:11.952701   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:11.960975   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:12.018214   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:12.139356   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:12.441185   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:12.446335   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:12.513192   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:12.638127   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:12.941682   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:12.946268   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:13.013729   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:13.139454   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:13.440743   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:13.445985   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:13.517764   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:13.638912   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:13.942290   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:13.947821   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:14.014012   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:14.139485   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:14.441683   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:14.445681   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:14.513793   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:14.638925   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:14.941384   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:14.946649   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:15.147627   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:15.149431   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:15.440761   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:15.446473   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:15.516478   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:15.638528   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:15.941121   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:15.946975   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:16.013486   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:16.138455   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:16.441368   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:16.445303   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:16.514138   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:16.649113   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:16.942014   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:16.946324   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:17.013310   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:17.138797   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:17.548125   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:17.550771   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:17.553142   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:17.638890   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:17.941654   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:17.945723   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:18.017588   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:18.138957   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:18.441505   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:18.445779   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:18.514716   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:18.639155   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:18.941899   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:18.946031   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:19.013878   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:19.138875   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:19.441782   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:19.445566   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:19.514353   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:19.647577   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:19.941106   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:19.946421   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:20.017203   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:20.139306   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:20.441441   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:20.446338   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:20.513550   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:20.639121   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:20.942070   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:20.945767   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:21.013485   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:21.141237   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:21.441752   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:21.448683   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:21.513566   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:21.639813   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:21.941647   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:21.945745   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:22.013557   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:22.139139   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:22.442032   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:22.445494   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:22.513783   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:22.639418   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:22.941723   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:22.947366   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:23.014401   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:23.139642   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:23.442233   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:23.454145   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:23.542431   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:23.638907   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:23.941751   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:23.945697   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:24.013794   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:24.139275   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:24.441296   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:24.446058   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:24.514000   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:24.640403   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:24.942045   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:24.946277   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:25.014272   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:25.139640   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:25.442333   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:25.446189   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:25.513604   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:25.638616   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:25.941026   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:25.946637   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:26.014884   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:26.138692   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:26.441023   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:26.446078   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:26.514289   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:26.638085   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:26.941828   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:26.946621   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:27.014721   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:27.138687   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:27.441072   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:27.446479   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:27.514543   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:27.638569   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:27.941814   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:27.945704   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:28.014133   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:28.150804   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:28.440517   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:28.445587   22294 kapi.go:107] duration metric: took 57.003936809s to wait for kubernetes.io/minikube-addons=registry ...
	I0612 20:14:28.512953   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:28.638516   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:28.941173   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:29.013886   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:29.138672   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:29.441191   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:29.523412   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:29.639897   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:29.940283   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:30.014079   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:30.139757   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:30.440774   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:30.517943   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:30.638459   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:30.941668   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:31.014184   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:31.138395   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:31.441643   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:31.513821   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:31.641887   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:31.941347   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:32.014268   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:32.139415   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:32.441787   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:32.513346   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:32.640964   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:32.942162   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:33.014047   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:33.139659   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:33.442093   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:33.514084   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:33.638871   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:33.941251   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:34.013569   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:34.138392   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:34.441925   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:34.527586   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:34.641739   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:34.940985   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:35.014913   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:35.139837   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:35.441387   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:35.515881   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:35.639895   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:35.942628   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:36.012597   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:36.138674   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:36.444605   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:36.514418   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:36.638222   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:36.941765   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:37.013241   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:37.138652   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:37.441434   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:37.513823   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:37.638579   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:37.941303   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:38.018216   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:38.138765   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:38.450258   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:38.515535   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:38.640664   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:38.944410   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:39.014738   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:39.139967   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:39.441411   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:39.514352   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:39.648884   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:39.942031   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:40.013420   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:40.138683   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:40.441351   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:40.513859   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:40.639624   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:40.941763   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:41.013022   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:41.139219   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:41.442004   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:41.512698   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:41.638306   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:41.941915   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:42.014206   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:42.140001   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:42.441642   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:42.513394   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:42.638585   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:42.945585   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:43.431947   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:43.438693   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:43.444440   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:43.513758   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:43.639210   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:43.943934   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:44.013554   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:44.138319   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:44.441609   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:44.513165   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:44.640206   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:44.941191   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:45.014426   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:45.139340   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:45.443215   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:45.513937   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:45.639045   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:46.128891   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:46.129113   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:46.142407   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:46.443217   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:46.513512   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:46.660955   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:46.941300   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:47.015201   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:47.147459   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:47.442562   22294 kapi.go:107] duration metric: took 1m16.007662934s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0612 20:14:47.513986   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:47.647004   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:48.013416   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:48.143071   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:48.513973   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:48.639573   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:49.013354   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:49.138390   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:49.514067   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:49.639269   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:50.014268   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:50.139479   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:50.513027   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:50.639459   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:51.014047   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:51.140498   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:51.513279   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:51.640851   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:52.014551   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:52.145705   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:52.514040   22294 kapi.go:107] duration metric: took 1m17.004104031s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0612 20:14:52.515934   22294 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-899843 cluster.
	I0612 20:14:52.517431   22294 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0612 20:14:52.518834   22294 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
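	The three out.go messages above explain the gcp-auth addon's behaviour: once it is ready, the GCP credentials are mounted into every newly created pod unless the pod opts out via the `gcp-auth-skip-secret` label. As a minimal sketch of such an opt-out (only the label key comes from the log message; the pod name, image, and the "true" value are assumptions for illustration, not taken from this run):

	# Hypothetical pod manifest; only the gcp-auth-skip-secret label key is taken
	# from the log message above, everything else is assumed for illustration.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]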
	I0612 20:14:52.642250   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:53.138957   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:53.640128   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:54.140417   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:54.638464   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:55.141403   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:55.638957   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:56.140084   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:56.639348   22294 kapi.go:107] duration metric: took 1m23.006400375s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0612 20:14:56.641302   22294 out.go:177] * Enabled addons: helm-tiller, storage-provisioner, cloud-spanner, yakd, ingress-dns, inspektor-gadget, nvidia-device-plugin, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0612 20:14:56.642492   22294 addons.go:510] duration metric: took 1m34.573770903s for enable addons: enabled=[helm-tiller storage-provisioner cloud-spanner yakd ingress-dns inspektor-gadget nvidia-device-plugin metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0612 20:14:56.642526   22294 start.go:245] waiting for cluster config update ...
	I0612 20:14:56.642541   22294 start.go:254] writing updated cluster config ...
	I0612 20:14:56.642775   22294 ssh_runner.go:195] Run: rm -f paused
	I0612 20:14:56.693145   22294 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 20:14:56.694687   22294 out.go:177] * Done! kubectl is now configured to use "addons-899843" cluster and "default" namespace by default
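	Most of the log above is the same kapi.go poll repeated for four label selectors (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=registry, =gcp-auth, =csi-hostpath-driver): minikube lists the matching pods, logs their current phase, and retries until everything is Running, then prints the "duration metric" line. The sketch below is a rough client-go illustration of that kind of wait loop, not minikube's actual kapi.go code; the selector, poll interval, timeout, and kubeconfig path are placeholders.

	// Hypothetical sketch (not minikube's implementation): poll pods matching a
	// label selector until all of them are Running, logging the phase each pass,
	// similar to the "waiting for pod ... current state" lines above.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods lists pods matching selector across all namespaces and retries
	// until every pod is Running or the timeout expires.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if ready {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // poll interval, assumed
		}
		return fmt.Errorf("timed out waiting for %q", selector)
	}

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // placeholder path
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Example selector taken from the log above; the 6-minute timeout is assumed.
		if err := waitForPods(context.Background(), cs, "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
			panic(err)
		}
	}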
	
	
	==> CRI-O <==
	Jun 12 20:17:58 addons-899843 crio[685]: time="2024-06-12 20:17:58.708109583Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718223478708076281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584737,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7aef1f8e-b0ba-4666-8782-5e425ab53a63 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:17:58 addons-899843 crio[685]: time="2024-06-12 20:17:58.709260680Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6161bce4-826e-4142-bb00-36be0d94766a name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:17:58 addons-899843 crio[685]: time="2024-06-12 20:17:58.709338566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6161bce4-826e-4142-bb00-36be0d94766a name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:17:58 addons-899843 crio[685]: time="2024-06-12 20:17:58.709829428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3f50da1dc82c3bdc5c4aee9cbe33faf413edf00ec08a86f07293581216d844,PodSandboxId:897b715d5540fce6bfb92cdd4e8e348fd89fe3b9a28ca69d862bb222805813cc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718223473051090169,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-kbtl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3c060ce-d46f-4a37-b318-985519591838,},Annotations:map[string]string{io.kubernetes.container.hash: 4c8d702,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bd23008db0bf6f352b7240f729961b64b1e163658cf859b110036cea0b36343,PodSandboxId:a4ce8e3e4607485156cc665da1652d4c57412b86ed986db2939f0b0773956e1e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718223331979871665,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 63c525be-66b7-432d-b1ae-2f835c9880fb,},Annotations:map[string]string{io.kuberne
tes.container.hash: 1acc993f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a9f043f430d9bc9b333afe30bad2c4d0fadbd3362a0a47995d93d80a596fdf,PodSandboxId:33fa45c3c5b80da849fd42f7b08ce8abedcb3c4a8c98495f8b4ec4de72270644,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718223303780808234,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-2hfkx,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: c88103f0-de17-4f17-a1dd-fa97f936c891,},Annotations:map[string]string{io.kubernetes.container.hash: 81bd51d0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d4b4e6a74844aa3fb50a9b67334de1ccc7db3684015519cc4309f6862b0350,PodSandboxId:873613a097a7909e0e77bac97e43f11f54296876dad008e182b2f00acaa5f6e6,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718223291588963247,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-68z9r,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: fb378bcc-ffc3-427d-8d9c-3d4e10666a6f,},Annotations:map[string]string{io.kubernetes.container.hash: e1cd2245,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fa06f1bdd4a48a9b93aa90febc4db617b0ecda48739d7f3565bb2c002addf2,PodSandboxId:53cd0d2a4c065eaf125fb1cfb8c45ab274a5b17beaec053d94de7aa99d299333,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1718223273501343142,Labels:map[string
]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z4t6h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6005e6d0-ecbb-4369-a23e-a8f1138ef240,},Annotations:map[string]string{io.kubernetes.container.hash: 44174f15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8579183a5e6a5a33d78933ba88a50c931b668d00923a05e9f82f1ea19f15fbc,PodSandboxId:524ce9e0f6d8e1afde4d7292d9126a88fc4449beb806d3e446a05aa3aca48898,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1
718223272153965283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-b7gmg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 16ad7cae-c236-40c4-83db-9a47d9d59cd1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e452470,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d4dd138630826914efef88030e35569cd5d20f0b2197c3bcdded7e1beaa4eb,PodSandboxId:383811ae9f7055532f19ae5088244cd03a0bb0990fbb1532a48364ea60d890fe,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,C
reatedAt:1718223257660480529,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-mwtps,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a01c7a18-474f-45e3-906d-4e7b54800ba0,},Annotations:map[string]string{io.kubernetes.container.hash: fe6613a5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e20f759a6abff3ddce914eaf3504b894db1e7b7f70afe41e69d452e5fc1dfe3,PodSandboxId:c49212b8d6af6f3d2a1b8fb049683c74fe9ee6ff60e6af3a96335877190cb1c9,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718223246450294727,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-g6s5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27,},Annotations:map[string]string{io.kubernetes.container.hash: 5669584c,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9757ad6bc984243b22d2f31c4395538db3da62772209d217bf69cae679a63a,PodSandboxId:ddf8ae86868ac83ae5e3874de4f61780a1c39e617b553fcb2d35426d0f88a699,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d6
28db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718223209119052384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa128d9-0268-4ed7-9ba8-a3405add5dd5,},Annotations:map[string]string{io.kubernetes.container.hash: 34461c91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb2eb9b48d57af25f2941f433f8710963ad414fa4886b1ecb969e2b098189f9,PodSandboxId:5702f8892f7515b81c0766a74d27a0032b536ec4957822a00fa534a2f1a06013,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674
fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718223205859284625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-whsws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad628dac-001d-4531-89fd-33629dcc54cb,},Annotations:map[string]string{io.kubernetes.container.hash: 5d7fd139,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af9c28efa5649762365aaf662619e5ef12712149626320de929ff8f3d
0913b91,PodSandboxId:3cde88d922a86baa00b3b490e2f52a80f9ace76f2d5aa8dc497a477acbbf7435,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718223203479106836,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rbbmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07785176-2ce1-4304-992e-8962b08939db,},Annotations:map[string]string{io.kubernetes.container.hash: e1528d11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d5a6b5dc6138a6fe7531c084808d8d1872a0a5bad983b681b1dea0b1283c97,PodSandboxId:80b1160c82cc
df458c755a43f93905583a5e90d04d49a4b7fdc59afe4508e485,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718223183845513001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87bd34de49164d7e23d3bd85d31e57a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d11557b4b02db631833f6cf99c4c112b3830f7f51a7c6df64e2b87f28c3dbb36,PodSandboxId:a39da441a9fa91e95f11c5f46b31b
8228f32e202998f1d4340a08e23e02ead01,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718223183815641904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 478ea678f98fdcf850e28e6b8d10601f,},Annotations:map[string]string{io.kubernetes.container.hash: f5425390,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df6fd69f56389dcb1fb1abcd816b7212dccc260e9e123a6a0582bb35082f34d,PodSandboxId:76cfdcd865926bb5ab09cff69860418be695006b5268ba
ad8ab4a00d44c78b5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718223183730729809,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecd228d1130bcad7d53d31f82588ba53,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a312cd5dbab5c630a6d9070588273ef333b2e11e4341e8003d515698a4f42c8d,PodSandboxId:4e67a4ce941c1583ba92539c39a261b550a6
b8860c438989a2d314acc04c1250,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718223183717163642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27583d060d65b458ede39de8e114234,},Annotations:map[string]string{io.kubernetes.container.hash: 2184cd2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6161bce4-826e-4142-bb00-36be0d94766a name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:17:58 addons-899843 crio[685]: time="2024-06-12 20:17:58.830873821Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74a23bb5-d046-43c7-820c-cd75f534ddc3 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:17:58 addons-899843 crio[685]: time="2024-06-12 20:17:58.830947768Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74a23bb5-d046-43c7-820c-cd75f534ddc3 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:17:58 addons-899843 crio[685]: time="2024-06-12 20:17:58.832048194Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec24c52a-c71a-4104-8a2d-910a739cf785 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:17:58 addons-899843 crio[685]: time="2024-06-12 20:17:58.833524093Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718223478833446184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584737,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec24c52a-c71a-4104-8a2d-910a739cf785 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:17:58 addons-899843 crio[685]: time="2024-06-12 20:17:58.834202822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6cecd1e-a6ac-4997-ab78-0b459d90d0fb name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:17:58 addons-899843 crio[685]: time="2024-06-12 20:17:58.834279204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6cecd1e-a6ac-4997-ab78-0b459d90d0fb name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:17:58 addons-899843 crio[685]: time="2024-06-12 20:17:58.834677754Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3f50da1dc82c3bdc5c4aee9cbe33faf413edf00ec08a86f07293581216d844,PodSandboxId:897b715d5540fce6bfb92cdd4e8e348fd89fe3b9a28ca69d862bb222805813cc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718223473051090169,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-kbtl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3c060ce-d46f-4a37-b318-985519591838,},Annotations:map[string]string{io.kubernetes.container.hash: 4c8d702,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bd23008db0bf6f352b7240f729961b64b1e163658cf859b110036cea0b36343,PodSandboxId:a4ce8e3e4607485156cc665da1652d4c57412b86ed986db2939f0b0773956e1e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718223331979871665,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 63c525be-66b7-432d-b1ae-2f835c9880fb,},Annotations:map[string]string{io.kuberne
tes.container.hash: 1acc993f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a9f043f430d9bc9b333afe30bad2c4d0fadbd3362a0a47995d93d80a596fdf,PodSandboxId:33fa45c3c5b80da849fd42f7b08ce8abedcb3c4a8c98495f8b4ec4de72270644,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718223303780808234,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-2hfkx,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: c88103f0-de17-4f17-a1dd-fa97f936c891,},Annotations:map[string]string{io.kubernetes.container.hash: 81bd51d0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d4b4e6a74844aa3fb50a9b67334de1ccc7db3684015519cc4309f6862b0350,PodSandboxId:873613a097a7909e0e77bac97e43f11f54296876dad008e182b2f00acaa5f6e6,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718223291588963247,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-68z9r,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: fb378bcc-ffc3-427d-8d9c-3d4e10666a6f,},Annotations:map[string]string{io.kubernetes.container.hash: e1cd2245,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fa06f1bdd4a48a9b93aa90febc4db617b0ecda48739d7f3565bb2c002addf2,PodSandboxId:53cd0d2a4c065eaf125fb1cfb8c45ab274a5b17beaec053d94de7aa99d299333,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1718223273501343142,Labels:map[string
]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z4t6h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6005e6d0-ecbb-4369-a23e-a8f1138ef240,},Annotations:map[string]string{io.kubernetes.container.hash: 44174f15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8579183a5e6a5a33d78933ba88a50c931b668d00923a05e9f82f1ea19f15fbc,PodSandboxId:524ce9e0f6d8e1afde4d7292d9126a88fc4449beb806d3e446a05aa3aca48898,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1
718223272153965283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-b7gmg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 16ad7cae-c236-40c4-83db-9a47d9d59cd1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e452470,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d4dd138630826914efef88030e35569cd5d20f0b2197c3bcdded7e1beaa4eb,PodSandboxId:383811ae9f7055532f19ae5088244cd03a0bb0990fbb1532a48364ea60d890fe,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,C
reatedAt:1718223257660480529,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-mwtps,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a01c7a18-474f-45e3-906d-4e7b54800ba0,},Annotations:map[string]string{io.kubernetes.container.hash: fe6613a5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e20f759a6abff3ddce914eaf3504b894db1e7b7f70afe41e69d452e5fc1dfe3,PodSandboxId:c49212b8d6af6f3d2a1b8fb049683c74fe9ee6ff60e6af3a96335877190cb1c9,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718223246450294727,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-g6s5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27,},Annotations:map[string]string{io.kubernetes.container.hash: 5669584c,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9757ad6bc984243b22d2f31c4395538db3da62772209d217bf69cae679a63a,PodSandboxId:ddf8ae86868ac83ae5e3874de4f61780a1c39e617b553fcb2d35426d0f88a699,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d6
28db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718223209119052384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa128d9-0268-4ed7-9ba8-a3405add5dd5,},Annotations:map[string]string{io.kubernetes.container.hash: 34461c91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb2eb9b48d57af25f2941f433f8710963ad414fa4886b1ecb969e2b098189f9,PodSandboxId:5702f8892f7515b81c0766a74d27a0032b536ec4957822a00fa534a2f1a06013,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674
fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718223205859284625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-whsws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad628dac-001d-4531-89fd-33629dcc54cb,},Annotations:map[string]string{io.kubernetes.container.hash: 5d7fd139,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af9c28efa5649762365aaf662619e5ef12712149626320de929ff8f3d
0913b91,PodSandboxId:3cde88d922a86baa00b3b490e2f52a80f9ace76f2d5aa8dc497a477acbbf7435,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718223203479106836,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rbbmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07785176-2ce1-4304-992e-8962b08939db,},Annotations:map[string]string{io.kubernetes.container.hash: e1528d11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d5a6b5dc6138a6fe7531c084808d8d1872a0a5bad983b681b1dea0b1283c97,PodSandboxId:80b1160c82cc
df458c755a43f93905583a5e90d04d49a4b7fdc59afe4508e485,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718223183845513001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87bd34de49164d7e23d3bd85d31e57a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d11557b4b02db631833f6cf99c4c112b3830f7f51a7c6df64e2b87f28c3dbb36,PodSandboxId:a39da441a9fa91e95f11c5f46b31b
8228f32e202998f1d4340a08e23e02ead01,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718223183815641904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 478ea678f98fdcf850e28e6b8d10601f,},Annotations:map[string]string{io.kubernetes.container.hash: f5425390,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df6fd69f56389dcb1fb1abcd816b7212dccc260e9e123a6a0582bb35082f34d,PodSandboxId:76cfdcd865926bb5ab09cff69860418be695006b5268ba
ad8ab4a00d44c78b5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718223183730729809,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecd228d1130bcad7d53d31f82588ba53,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a312cd5dbab5c630a6d9070588273ef333b2e11e4341e8003d515698a4f42c8d,PodSandboxId:4e67a4ce941c1583ba92539c39a261b550a6
b8860c438989a2d314acc04c1250,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718223183717163642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27583d060d65b458ede39de8e114234,},Annotations:map[string]string{io.kubernetes.container.hash: 2184cd2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6cecd1e-a6ac-4997-ab78-0b459d90d0fb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	af3f50da1dc82       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      5 seconds ago       Running             hello-world-app           0                   897b715d5540f       hello-world-app-86c47465fc-kbtl7
	1bd23008db0bf       docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa                              2 minutes ago       Running             nginx                     0                   a4ce8e3e46074       nginx
	35a9f043f430d       ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5                        2 minutes ago       Running             headlamp                  0                   33fa45c3c5b80       headlamp-7fc69f7444-2hfkx
	87d4b4e6a7484       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   873613a097a79       gcp-auth-5db96cd9b4-68z9r
	a1fa06f1bdd4a       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             3 minutes ago       Exited              patch                     1                   53cd0d2a4c065       ingress-nginx-admission-patch-z4t6h
	b8579183a5e6a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   524ce9e0f6d8e       ingress-nginx-admission-create-b7gmg
	71d4dd1386308       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   383811ae9f705       yakd-dashboard-5ddbf7d777-mwtps
	7e20f759a6abf       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        3 minutes ago       Running             metrics-server            0                   c49212b8d6af6       metrics-server-c59844bb4-g6s5d
	8a9757ad6bc98       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   ddf8ae86868ac       storage-provisioner
	bbb2eb9b48d57       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   5702f8892f751       coredns-7db6d8ff4d-whsws
	af9c28efa5649       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                             4 minutes ago       Running             kube-proxy                0                   3cde88d922a86       kube-proxy-rbbmx
	c9d5a6b5dc613       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                             4 minutes ago       Running             kube-scheduler            0                   80b1160c82ccd       kube-scheduler-addons-899843
	d11557b4b02db       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                             4 minutes ago       Running             kube-apiserver            0                   a39da441a9fa9       kube-apiserver-addons-899843
	3df6fd69f5638       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                             4 minutes ago       Running             kube-controller-manager   0                   76cfdcd865926       kube-controller-manager-addons-899843
	a312cd5dbab5c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   4e67a4ce941c1       etcd-addons-899843
	
	
	==> coredns [bbb2eb9b48d57af25f2941f433f8710963ad414fa4886b1ecb969e2b098189f9] <==
	[INFO] 10.244.0.9:44984 - 63898 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001537668s
	[INFO] 10.244.0.9:55734 - 33224 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00009839s
	[INFO] 10.244.0.9:55734 - 6858 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114016s
	[INFO] 10.244.0.9:51890 - 8270 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000188916s
	[INFO] 10.244.0.9:51890 - 10828 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00047844s
	[INFO] 10.244.0.9:46906 - 24528 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000180967s
	[INFO] 10.244.0.9:46906 - 40658 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000199381s
	[INFO] 10.244.0.9:39398 - 25333 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000054258s
	[INFO] 10.244.0.9:39398 - 9675 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00007112s
	[INFO] 10.244.0.9:45545 - 35049 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000102468s
	[INFO] 10.244.0.9:45545 - 39403 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000221204s
	[INFO] 10.244.0.9:60573 - 22396 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035317s
	[INFO] 10.244.0.9:60573 - 44914 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004122s
	[INFO] 10.244.0.9:41404 - 38014 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000155102s
	[INFO] 10.244.0.9:41404 - 8048 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000307046s
	[INFO] 10.244.0.22:53845 - 34394 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000296173s
	[INFO] 10.244.0.22:50119 - 31106 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00011825s
	[INFO] 10.244.0.22:60392 - 4096 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122548s
	[INFO] 10.244.0.22:53159 - 29292 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000058811s
	[INFO] 10.244.0.22:60239 - 41880 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00010106s
	[INFO] 10.244.0.22:53011 - 5038 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000199657s
	[INFO] 10.244.0.22:47815 - 7584 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000478038s
	[INFO] 10.244.0.22:43708 - 60949 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000770248s
	[INFO] 10.244.0.25:42302 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00036584s
	[INFO] 10.244.0.25:39976 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000210016s
	
	
	==> describe nodes <==
	Name:               addons-899843
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-899843
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=addons-899843
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T20_13_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-899843
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:13:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-899843
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:17:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:16:13 +0000   Wed, 12 Jun 2024 20:13:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:16:13 +0000   Wed, 12 Jun 2024 20:13:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:16:13 +0000   Wed, 12 Jun 2024 20:13:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:16:13 +0000   Wed, 12 Jun 2024 20:13:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    addons-899843
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7720e2d2b9d4fac92e9d34a7e19b889
	  System UUID:                c7720e2d-2b9d-4fac-92e9-d34a7e19b889
	  Boot ID:                    d7e9cbad-9bfc-4e95-97e5-e442875e4a37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-kbtl7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-5db96cd9b4-68z9r                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  headlamp                    headlamp-7fc69f7444-2hfkx                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-7db6d8ff4d-whsws                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m37s
	  kube-system                 etcd-addons-899843                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m50s
	  kube-system                 kube-apiserver-addons-899843             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-controller-manager-addons-899843    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-proxy-rbbmx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-scheduler-addons-899843             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 metrics-server-c59844bb4-g6s5d           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m32s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-mwtps          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m34s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m56s (x8 over 4m57s)  kubelet          Node addons-899843 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x8 over 4m57s)  kubelet          Node addons-899843 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s (x7 over 4m57s)  kubelet          Node addons-899843 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m50s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m50s                  kubelet          Node addons-899843 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m50s                  kubelet          Node addons-899843 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m50s                  kubelet          Node addons-899843 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m49s                  kubelet          Node addons-899843 status is now: NodeReady
	  Normal  RegisteredNode           4m38s                  node-controller  Node addons-899843 event: Registered Node addons-899843 in Controller
	
	
	==> dmesg <==
	[  +5.001851] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.098927] kauditd_printk_skb: 120 callbacks suppressed
	[  +5.087273] kauditd_printk_skb: 76 callbacks suppressed
	[ +13.652101] kauditd_printk_skb: 19 callbacks suppressed
	[Jun12 20:14] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.364142] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.275660] kauditd_printk_skb: 23 callbacks suppressed
	[ +10.110740] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.736161] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.016679] kauditd_printk_skb: 64 callbacks suppressed
	[  +7.728139] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.336233] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.162446] kauditd_printk_skb: 12 callbacks suppressed
	[Jun12 20:15] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.213681] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.878218] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.144634] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.094372] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.155851] kauditd_printk_skb: 36 callbacks suppressed
	[ +23.693912] kauditd_printk_skb: 5 callbacks suppressed
	[Jun12 20:16] kauditd_printk_skb: 8 callbacks suppressed
	[ +30.594921] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.854653] kauditd_printk_skb: 33 callbacks suppressed
	[Jun12 20:17] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.214368] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a312cd5dbab5c630a6d9070588273ef333b2e11e4341e8003d515698a4f42c8d] <==
	{"level":"info","ts":"2024-06-12T20:14:46.105479Z","caller":"traceutil/trace.go:171","msg":"trace[159866388] linearizableReadLoop","detail":"{readStateIndex:1181; appliedIndex:1180; }","duration":"376.741899ms","start":"2024-06-12T20:14:45.728722Z","end":"2024-06-12T20:14:46.105464Z","steps":["trace[159866388] 'read index received'  (duration: 376.486582ms)","trace[159866388] 'applied index is now lower than readState.Index'  (duration: 254.536µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-12T20:14:46.105567Z","caller":"traceutil/trace.go:171","msg":"trace[995288748] transaction","detail":"{read_only:false; response_revision:1146; number_of_response:1; }","duration":"395.553724ms","start":"2024-06-12T20:14:45.710007Z","end":"2024-06-12T20:14:46.105561Z","steps":["trace[995288748] 'process raft request'  (duration: 395.253738ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:14:46.105648Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T20:14:45.709995Z","time spent":"395.591476ms","remote":"127.0.0.1:37460","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":764,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-ljwsf.17d85af8725877a6\" mod_revision:1140 > success:<request_put:<key:\"/registry/events/gadget/gadget-ljwsf.17d85af8725877a6\" value_size:693 lease:2691622117465690094 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-ljwsf.17d85af8725877a6\" > >"}
	{"level":"warn","ts":"2024-06-12T20:14:46.10593Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"377.209447ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-06-12T20:14:46.105956Z","caller":"traceutil/trace.go:171","msg":"trace[798071367] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1146; }","duration":"377.254431ms","start":"2024-06-12T20:14:45.728693Z","end":"2024-06-12T20:14:46.105948Z","steps":["trace[798071367] 'agreement among raft nodes before linearized reading'  (duration: 377.118408ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:14:46.105975Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T20:14:45.728681Z","time spent":"377.289922ms","remote":"127.0.0.1:37536","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-06-12T20:14:46.106149Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.038664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-06-12T20:14:46.106166Z","caller":"traceutil/trace.go:171","msg":"trace[168276800] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1146; }","duration":"186.073519ms","start":"2024-06-12T20:14:45.920088Z","end":"2024-06-12T20:14:46.106161Z","steps":["trace[168276800] 'agreement among raft nodes before linearized reading'  (duration: 186.005157ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:14:46.106631Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.343984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-06-12T20:14:46.10668Z","caller":"traceutil/trace.go:171","msg":"trace[1284212033] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1146; }","duration":"113.416319ms","start":"2024-06-12T20:14:45.993256Z","end":"2024-06-12T20:14:46.106672Z","steps":["trace[1284212033] 'agreement among raft nodes before linearized reading'  (duration: 113.047971ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:14:57.400401Z","caller":"traceutil/trace.go:171","msg":"trace[1762200910] transaction","detail":"{read_only:false; response_revision:1220; number_of_response:1; }","duration":"105.985912ms","start":"2024-06-12T20:14:57.294341Z","end":"2024-06-12T20:14:57.400327Z","steps":["trace[1762200910] 'process raft request'  (duration: 105.377641ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:15:01.706419Z","caller":"traceutil/trace.go:171","msg":"trace[655074709] transaction","detail":"{read_only:false; response_revision:1259; number_of_response:1; }","duration":"125.651414ms","start":"2024-06-12T20:15:01.580755Z","end":"2024-06-12T20:15:01.706407Z","steps":["trace[655074709] 'process raft request'  (duration: 125.501366ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:15:03.495171Z","caller":"traceutil/trace.go:171","msg":"trace[1327267216] transaction","detail":"{read_only:false; response_revision:1268; number_of_response:1; }","duration":"128.856372ms","start":"2024-06-12T20:15:03.366298Z","end":"2024-06-12T20:15:03.495154Z","steps":["trace[1327267216] 'process raft request'  (duration: 128.751804ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:16:00.390923Z","caller":"traceutil/trace.go:171","msg":"trace[2004317318] transaction","detail":"{read_only:false; response_revision:1598; number_of_response:1; }","duration":"408.786624ms","start":"2024-06-12T20:15:59.982115Z","end":"2024-06-12T20:16:00.390902Z","steps":["trace[2004317318] 'process raft request'  (duration: 408.688703ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:16:00.391094Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T20:15:59.982101Z","time spent":"408.908653ms","remote":"127.0.0.1:52670","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1592 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-06-12T20:16:00.391536Z","caller":"traceutil/trace.go:171","msg":"trace[1448742879] linearizableReadLoop","detail":"{readStateIndex:1654; appliedIndex:1654; }","duration":"354.07715ms","start":"2024-06-12T20:16:00.037441Z","end":"2024-06-12T20:16:00.391518Z","steps":["trace[1448742879] 'read index received'  (duration: 354.069625ms)","trace[1448742879] 'applied index is now lower than readState.Index'  (duration: 6.611µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T20:16:00.391673Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"354.223549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6032"}
	{"level":"info","ts":"2024-06-12T20:16:00.391731Z","caller":"traceutil/trace.go:171","msg":"trace[65560386] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1598; }","duration":"354.306674ms","start":"2024-06-12T20:16:00.037413Z","end":"2024-06-12T20:16:00.391719Z","steps":["trace[65560386] 'agreement among raft nodes before linearized reading'  (duration: 354.174219ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:16:00.391752Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T20:16:00.037398Z","time spent":"354.349686ms","remote":"127.0.0.1:37562","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":6055,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2024-06-12T20:16:31.544433Z","caller":"traceutil/trace.go:171","msg":"trace[752769622] linearizableReadLoop","detail":"{readStateIndex:1761; appliedIndex:1760; }","duration":"202.08711ms","start":"2024-06-12T20:16:31.342248Z","end":"2024-06-12T20:16:31.544335Z","steps":["trace[752769622] 'read index received'  (duration: 201.937756ms)","trace[752769622] 'applied index is now lower than readState.Index'  (duration: 148.941µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-12T20:16:31.544544Z","caller":"traceutil/trace.go:171","msg":"trace[1527852028] transaction","detail":"{read_only:false; response_revision:1698; number_of_response:1; }","duration":"213.010261ms","start":"2024-06-12T20:16:31.331514Z","end":"2024-06-12T20:16:31.544524Z","steps":["trace[1527852028] 'process raft request'  (duration: 212.707631ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:16:31.544816Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.495708ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-12T20:16:31.544876Z","caller":"traceutil/trace.go:171","msg":"trace[1467260830] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1698; }","duration":"202.640335ms","start":"2024-06-12T20:16:31.342221Z","end":"2024-06-12T20:16:31.544862Z","steps":["trace[1467260830] 'agreement among raft nodes before linearized reading'  (duration: 202.355357ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:16:31.544835Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.484473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6126"}
	{"level":"info","ts":"2024-06-12T20:16:31.545024Z","caller":"traceutil/trace.go:171","msg":"trace[2011878052] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1698; }","duration":"198.71668ms","start":"2024-06-12T20:16:31.346302Z","end":"2024-06-12T20:16:31.545019Z","steps":["trace[2011878052] 'agreement among raft nodes before linearized reading'  (duration: 198.449424ms)"],"step_count":1}
	
	
	==> gcp-auth [87d4b4e6a74844aa3fb50a9b67334de1ccc7db3684015519cc4309f6862b0350] <==
	2024/06/12 20:14:51 GCP Auth Webhook started!
	2024/06/12 20:14:57 Ready to marshal response ...
	2024/06/12 20:14:57 Ready to write response ...
	2024/06/12 20:14:57 Ready to marshal response ...
	2024/06/12 20:14:57 Ready to write response ...
	2024/06/12 20:14:57 Ready to marshal response ...
	2024/06/12 20:14:57 Ready to write response ...
	2024/06/12 20:15:01 Ready to marshal response ...
	2024/06/12 20:15:01 Ready to write response ...
	2024/06/12 20:15:07 Ready to marshal response ...
	2024/06/12 20:15:07 Ready to write response ...
	2024/06/12 20:15:14 Ready to marshal response ...
	2024/06/12 20:15:14 Ready to write response ...
	2024/06/12 20:15:14 Ready to marshal response ...
	2024/06/12 20:15:14 Ready to write response ...
	2024/06/12 20:15:27 Ready to marshal response ...
	2024/06/12 20:15:27 Ready to write response ...
	2024/06/12 20:15:27 Ready to marshal response ...
	2024/06/12 20:15:27 Ready to write response ...
	2024/06/12 20:15:53 Ready to marshal response ...
	2024/06/12 20:15:53 Ready to write response ...
	2024/06/12 20:16:23 Ready to marshal response ...
	2024/06/12 20:16:23 Ready to write response ...
	2024/06/12 20:17:48 Ready to marshal response ...
	2024/06/12 20:17:48 Ready to write response ...
	
	
	==> kernel <==
	 20:17:59 up 5 min,  0 users,  load average: 0.41, 1.04, 0.56
	Linux addons-899843 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d11557b4b02db631833f6cf99c4c112b3830f7f51a7c6df64e2b87f28c3dbb36] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0612 20:15:08.384000       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.176.248:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.176.248:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.176.248:443: connect: connection refused
	E0612 20:15:08.390501       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.176.248:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.176.248:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.176.248:443: connect: connection refused
	I0612 20:15:08.466294       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0612 20:15:21.456841       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0612 20:15:22.496198       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0612 20:15:27.031564       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0612 20:15:27.275413       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.48.92"}
	E0612 20:15:43.305008       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0612 20:16:08.103617       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0612 20:16:40.369801       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0612 20:16:40.370025       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0612 20:16:40.391605       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0612 20:16:40.391660       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0612 20:16:40.400276       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0612 20:16:40.400340       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0612 20:16:40.407436       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0612 20:16:40.407510       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0612 20:16:40.449479       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0612 20:16:40.449585       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0612 20:16:40.483903       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	W0612 20:16:41.400678       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0612 20:16:41.449707       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0612 20:16:41.468119       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0612 20:17:49.049627       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.234.57"}
	
	
	==> kube-controller-manager [3df6fd69f56389dcb1fb1abcd816b7212dccc260e9e123a6a0582bb35082f34d] <==
	I0612 20:16:52.303579       1 shared_informer.go:320] Caches are synced for garbage collector
	W0612 20:16:56.658560       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:16:56.658715       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:16:59.336786       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:16:59.336853       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:17:00.577158       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:17:00.577252       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:17:14.987214       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:17:14.987464       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:17:17.745605       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:17:17.745676       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:17:21.045058       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:17:21.045116       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:17:36.309750       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:17:36.309820       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:17:45.318631       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:17:45.318687       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0612 20:17:48.872772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="31.161828ms"
	I0612 20:17:48.899834       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="26.99979ms"
	I0612 20:17:48.899912       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="30.099µs"
	I0612 20:17:50.887011       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0612 20:17:50.893229       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="3.515µs"
	I0612 20:17:50.895908       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0612 20:17:53.287258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="12.693148ms"
	I0612 20:17:53.287575       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="86.459µs"
	
	
	==> kube-proxy [af9c28efa5649762365aaf662619e5ef12712149626320de929ff8f3d0913b91] <==
	I0612 20:13:24.342635       1 server_linux.go:69] "Using iptables proxy"
	I0612 20:13:24.371568       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.248"]
	I0612 20:13:24.502205       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 20:13:24.502235       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 20:13:24.502250       1 server_linux.go:165] "Using iptables Proxier"
	I0612 20:13:24.516483       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 20:13:24.516656       1 server.go:872] "Version info" version="v1.30.1"
	I0612 20:13:24.516671       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:13:24.518303       1 config.go:192] "Starting service config controller"
	I0612 20:13:24.518317       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 20:13:24.518415       1 config.go:101] "Starting endpoint slice config controller"
	I0612 20:13:24.518421       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 20:13:24.518726       1 config.go:319] "Starting node config controller"
	I0612 20:13:24.518731       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 20:13:24.618851       1 shared_informer.go:320] Caches are synced for node config
	I0612 20:13:24.618885       1 shared_informer.go:320] Caches are synced for service config
	I0612 20:13:24.618913       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c9d5a6b5dc6138a6fe7531c084808d8d1872a0a5bad983b681b1dea0b1283c97] <==
	W0612 20:13:06.352517       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0612 20:13:06.352547       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0612 20:13:06.352604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0612 20:13:06.352633       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0612 20:13:07.227225       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0612 20:13:07.227451       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0612 20:13:07.256252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0612 20:13:07.256304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0612 20:13:07.317503       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 20:13:07.317649       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 20:13:07.388248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0612 20:13:07.388439       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0612 20:13:07.388342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0612 20:13:07.388521       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0612 20:13:07.402810       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0612 20:13:07.403112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0612 20:13:07.445227       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0612 20:13:07.445274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0612 20:13:07.485454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0612 20:13:07.485502       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0612 20:13:07.597681       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0612 20:13:07.597834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0612 20:13:07.661194       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0612 20:13:07.661329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 20:13:09.027704       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 20:17:48 addons-899843 kubelet[1283]: I0612 20:17:48.877654    1283 memory_manager.go:354] "RemoveStaleState removing state" podUID="7350c859-7403-48dd-8f17-716af45a66e0" containerName="volume-snapshot-controller"
	Jun 12 20:17:48 addons-899843 kubelet[1283]: I0612 20:17:48.909620    1283 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d3c060ce-d46f-4a37-b318-985519591838-gcp-creds\") pod \"hello-world-app-86c47465fc-kbtl7\" (UID: \"d3c060ce-d46f-4a37-b318-985519591838\") " pod="default/hello-world-app-86c47465fc-kbtl7"
	Jun 12 20:17:48 addons-899843 kubelet[1283]: I0612 20:17:48.909790    1283 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v62gd\" (UniqueName: \"kubernetes.io/projected/d3c060ce-d46f-4a37-b318-985519591838-kube-api-access-v62gd\") pod \"hello-world-app-86c47465fc-kbtl7\" (UID: \"d3c060ce-d46f-4a37-b318-985519591838\") " pod="default/hello-world-app-86c47465fc-kbtl7"
	Jun 12 20:17:50 addons-899843 kubelet[1283]: I0612 20:17:50.018849    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjrpd\" (UniqueName: \"kubernetes.io/projected/fe4b4575-3547-4019-bc49-d7599aaaedc1-kube-api-access-vjrpd\") pod \"fe4b4575-3547-4019-bc49-d7599aaaedc1\" (UID: \"fe4b4575-3547-4019-bc49-d7599aaaedc1\") "
	Jun 12 20:17:50 addons-899843 kubelet[1283]: I0612 20:17:50.023049    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe4b4575-3547-4019-bc49-d7599aaaedc1-kube-api-access-vjrpd" (OuterVolumeSpecName: "kube-api-access-vjrpd") pod "fe4b4575-3547-4019-bc49-d7599aaaedc1" (UID: "fe4b4575-3547-4019-bc49-d7599aaaedc1"). InnerVolumeSpecName "kube-api-access-vjrpd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 12 20:17:50 addons-899843 kubelet[1283]: I0612 20:17:50.120476    1283 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vjrpd\" (UniqueName: \"kubernetes.io/projected/fe4b4575-3547-4019-bc49-d7599aaaedc1-kube-api-access-vjrpd\") on node \"addons-899843\" DevicePath \"\""
	Jun 12 20:17:50 addons-899843 kubelet[1283]: I0612 20:17:50.220701    1283 scope.go:117] "RemoveContainer" containerID="a55939509e0bbd66489964090ec9e8590bff52c57281afd459f2ae4ee49bdf9a"
	Jun 12 20:17:50 addons-899843 kubelet[1283]: I0612 20:17:50.257557    1283 scope.go:117] "RemoveContainer" containerID="a55939509e0bbd66489964090ec9e8590bff52c57281afd459f2ae4ee49bdf9a"
	Jun 12 20:17:50 addons-899843 kubelet[1283]: E0612 20:17:50.258201    1283 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a55939509e0bbd66489964090ec9e8590bff52c57281afd459f2ae4ee49bdf9a\": container with ID starting with a55939509e0bbd66489964090ec9e8590bff52c57281afd459f2ae4ee49bdf9a not found: ID does not exist" containerID="a55939509e0bbd66489964090ec9e8590bff52c57281afd459f2ae4ee49bdf9a"
	Jun 12 20:17:50 addons-899843 kubelet[1283]: I0612 20:17:50.258231    1283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a55939509e0bbd66489964090ec9e8590bff52c57281afd459f2ae4ee49bdf9a"} err="failed to get container status \"a55939509e0bbd66489964090ec9e8590bff52c57281afd459f2ae4ee49bdf9a\": rpc error: code = NotFound desc = could not find container \"a55939509e0bbd66489964090ec9e8590bff52c57281afd459f2ae4ee49bdf9a\": container with ID starting with a55939509e0bbd66489964090ec9e8590bff52c57281afd459f2ae4ee49bdf9a not found: ID does not exist"
	Jun 12 20:17:51 addons-899843 kubelet[1283]: I0612 20:17:51.156798    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ad7cae-c236-40c4-83db-9a47d9d59cd1" path="/var/lib/kubelet/pods/16ad7cae-c236-40c4-83db-9a47d9d59cd1/volumes"
	Jun 12 20:17:51 addons-899843 kubelet[1283]: I0612 20:17:51.157761    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6005e6d0-ecbb-4369-a23e-a8f1138ef240" path="/var/lib/kubelet/pods/6005e6d0-ecbb-4369-a23e-a8f1138ef240/volumes"
	Jun 12 20:17:51 addons-899843 kubelet[1283]: I0612 20:17:51.158178    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe4b4575-3547-4019-bc49-d7599aaaedc1" path="/var/lib/kubelet/pods/fe4b4575-3547-4019-bc49-d7599aaaedc1/volumes"
	Jun 12 20:17:53 addons-899843 kubelet[1283]: I0612 20:17:53.273212    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-kbtl7" podStartSLOduration=1.715019469 podStartE2EDuration="5.2731772s" podCreationTimestamp="2024-06-12 20:17:48 +0000 UTC" firstStartedPulling="2024-06-12 20:17:49.478028175 +0000 UTC m=+280.500739924" lastFinishedPulling="2024-06-12 20:17:53.036185907 +0000 UTC m=+284.058897655" observedRunningTime="2024-06-12 20:17:53.272249858 +0000 UTC m=+284.294961626" watchObservedRunningTime="2024-06-12 20:17:53.2731772 +0000 UTC m=+284.295888966"
	Jun 12 20:17:54 addons-899843 kubelet[1283]: I0612 20:17:54.148113    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5d7g\" (UniqueName: \"kubernetes.io/projected/9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf-kube-api-access-l5d7g\") pod \"9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf\" (UID: \"9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf\") "
	Jun 12 20:17:54 addons-899843 kubelet[1283]: I0612 20:17:54.148200    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf-webhook-cert\") pod \"9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf\" (UID: \"9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf\") "
	Jun 12 20:17:54 addons-899843 kubelet[1283]: I0612 20:17:54.153520    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf" (UID: "9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 12 20:17:54 addons-899843 kubelet[1283]: I0612 20:17:54.153520    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf-kube-api-access-l5d7g" (OuterVolumeSpecName: "kube-api-access-l5d7g") pod "9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf" (UID: "9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf"). InnerVolumeSpecName "kube-api-access-l5d7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 12 20:17:54 addons-899843 kubelet[1283]: I0612 20:17:54.249539    1283 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-l5d7g\" (UniqueName: \"kubernetes.io/projected/9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf-kube-api-access-l5d7g\") on node \"addons-899843\" DevicePath \"\""
	Jun 12 20:17:54 addons-899843 kubelet[1283]: I0612 20:17:54.249574    1283 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf-webhook-cert\") on node \"addons-899843\" DevicePath \"\""
	Jun 12 20:17:54 addons-899843 kubelet[1283]: I0612 20:17:54.269780    1283 scope.go:117] "RemoveContainer" containerID="b8a308f10269e8216d44a0e135eb51a1b25c1a225ddf71549ca8c9562feeafa3"
	Jun 12 20:17:54 addons-899843 kubelet[1283]: I0612 20:17:54.297985    1283 scope.go:117] "RemoveContainer" containerID="b8a308f10269e8216d44a0e135eb51a1b25c1a225ddf71549ca8c9562feeafa3"
	Jun 12 20:17:54 addons-899843 kubelet[1283]: E0612 20:17:54.298601    1283 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8a308f10269e8216d44a0e135eb51a1b25c1a225ddf71549ca8c9562feeafa3\": container with ID starting with b8a308f10269e8216d44a0e135eb51a1b25c1a225ddf71549ca8c9562feeafa3 not found: ID does not exist" containerID="b8a308f10269e8216d44a0e135eb51a1b25c1a225ddf71549ca8c9562feeafa3"
	Jun 12 20:17:54 addons-899843 kubelet[1283]: I0612 20:17:54.298631    1283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a308f10269e8216d44a0e135eb51a1b25c1a225ddf71549ca8c9562feeafa3"} err="failed to get container status \"b8a308f10269e8216d44a0e135eb51a1b25c1a225ddf71549ca8c9562feeafa3\": rpc error: code = NotFound desc = could not find container \"b8a308f10269e8216d44a0e135eb51a1b25c1a225ddf71549ca8c9562feeafa3\": container with ID starting with b8a308f10269e8216d44a0e135eb51a1b25c1a225ddf71549ca8c9562feeafa3 not found: ID does not exist"
	Jun 12 20:17:55 addons-899843 kubelet[1283]: I0612 20:17:55.154915    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf" path="/var/lib/kubelet/pods/9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf/volumes"
	
	
	==> storage-provisioner [8a9757ad6bc984243b22d2f31c4395538db3da62772209d217bf69cae679a63a] <==
	I0612 20:13:30.366230       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0612 20:13:30.472184       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0612 20:13:30.472298       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0612 20:13:30.505156       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0612 20:13:30.506469       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-899843_6b6be76f-bfec-4661-b11f-f7c147a1abd8!
	I0612 20:13:30.508133       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"225df958-7d42-4f80-ad26-74574bae21bd", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-899843_6b6be76f-bfec-4661-b11f-f7c147a1abd8 became leader
	I0612 20:13:30.609532       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-899843_6b6be76f-bfec-4661-b11f-f7c147a1abd8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-899843 -n addons-899843
helpers_test.go:261: (dbg) Run:  kubectl --context addons-899843 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.15s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (318.91s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.635205ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-g6s5d" [4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005128886s
addons_test.go:417: (dbg) Run:  kubectl --context addons-899843 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-899843 top pods -n kube-system: exit status 1 (66.643442ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-whsws, age: 2m3.664614714s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-899843 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-899843 top pods -n kube-system: exit status 1 (63.222504ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-whsws, age: 2m7.518250202s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-899843 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-899843 top pods -n kube-system: exit status 1 (63.540166ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-whsws, age: 2m14.158843117s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-899843 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-899843 top pods -n kube-system: exit status 1 (64.943131ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-whsws, age: 2m19.739784433s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-899843 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-899843 top pods -n kube-system: exit status 1 (63.287956ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-whsws, age: 2m32.564798682s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-899843 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-899843 top pods -n kube-system: exit status 1 (68.272015ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-whsws, age: 2m40.40795876s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-899843 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-899843 top pods -n kube-system: exit status 1 (80.058174ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-whsws, age: 3m10.791442924s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-899843 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-899843 top pods -n kube-system: exit status 1 (61.422715ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-whsws, age: 3m52.232497429s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-899843 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-899843 top pods -n kube-system: exit status 1 (61.460784ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-whsws, age: 4m33.455372853s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-899843 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-899843 top pods -n kube-system: exit status 1 (62.911846ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-whsws, age: 5m19.485856453s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-899843 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-899843 top pods -n kube-system: exit status 1 (61.981084ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-whsws, age: 6m44.219397684s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-899843 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-899843 top pods -n kube-system: exit status 1 (61.782621ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-whsws, age: 7m14.771609519s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-899843 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-899843 -n addons-899843
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-899843 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-899843 logs -n 25: (1.382770747s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC | 12 Jun 24 20:12 UTC |
	| delete  | -p download-only-740695                                                                     | download-only-740695 | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC | 12 Jun 24 20:12 UTC |
	| delete  | -p download-only-691398                                                                     | download-only-691398 | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC | 12 Jun 24 20:12 UTC |
	| delete  | -p download-only-740695                                                                     | download-only-740695 | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC | 12 Jun 24 20:12 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-323011 | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC |                     |
	|         | binary-mirror-323011                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40201                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-323011                                                                     | binary-mirror-323011 | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC | 12 Jun 24 20:12 UTC |
	| addons  | disable dashboard -p                                                                        | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC |                     |
	|         | addons-899843                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC |                     |
	|         | addons-899843                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-899843 --wait=true                                                                | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:12 UTC | 12 Jun 24 20:14 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:14 UTC | 12 Jun 24 20:14 UTC |
	|         | -p addons-899843                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-899843 addons disable                                                                | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:15 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-899843 ip                                                                            | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:15 UTC |
	| addons  | addons-899843 addons disable                                                                | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:15 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:15 UTC |
	|         | -p addons-899843                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:15 UTC |
	|         | addons-899843                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:15 UTC |
	|         | addons-899843                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-899843 ssh cat                                                                       | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:15 UTC |
	|         | /opt/local-path-provisioner/pvc-0b5a2113-5bb0-41c3-b569-15c053bb7f98_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-899843 addons disable                                                                | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC | 12 Jun 24 20:16 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-899843 ssh curl -s                                                                   | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:15 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-899843 addons                                                                        | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:16 UTC | 12 Jun 24 20:16 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-899843 addons                                                                        | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:16 UTC | 12 Jun 24 20:16 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-899843 ip                                                                            | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:17 UTC | 12 Jun 24 20:17 UTC |
	| addons  | addons-899843 addons disable                                                                | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:17 UTC | 12 Jun 24 20:17 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-899843 addons disable                                                                | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:17 UTC | 12 Jun 24 20:17 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-899843 addons                                                                        | addons-899843        | jenkins | v1.33.1 | 12 Jun 24 20:20 UTC | 12 Jun 24 20:20 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 20:12:27
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 20:12:27.664136   22294 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:12:27.664256   22294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:12:27.664265   22294 out.go:304] Setting ErrFile to fd 2...
	I0612 20:12:27.664270   22294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:12:27.664477   22294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:12:27.665096   22294 out.go:298] Setting JSON to false
	I0612 20:12:27.665957   22294 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3293,"bootTime":1718219855,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 20:12:27.666014   22294 start.go:139] virtualization: kvm guest
	I0612 20:12:27.668220   22294 out.go:177] * [addons-899843] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 20:12:27.669642   22294 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 20:12:27.669592   22294 notify.go:220] Checking for updates...
	I0612 20:12:27.671320   22294 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 20:12:27.672719   22294 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:12:27.674240   22294 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:12:27.675696   22294 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 20:12:27.677151   22294 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 20:12:27.678737   22294 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 20:12:27.710480   22294 out.go:177] * Using the kvm2 driver based on user configuration
	I0612 20:12:27.711863   22294 start.go:297] selected driver: kvm2
	I0612 20:12:27.711878   22294 start.go:901] validating driver "kvm2" against <nil>
	I0612 20:12:27.711888   22294 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 20:12:27.712578   22294 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:12:27.712637   22294 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 20:12:27.728064   22294 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0612 20:12:27.728115   22294 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0612 20:12:27.728322   22294 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 20:12:27.728381   22294 cni.go:84] Creating CNI manager for ""
	I0612 20:12:27.728393   22294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 20:12:27.728401   22294 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0612 20:12:27.728444   22294 start.go:340] cluster config:
	{Name:addons-899843 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-899843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:12:27.728560   22294 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:12:27.731168   22294 out.go:177] * Starting "addons-899843" primary control-plane node in "addons-899843" cluster
	I0612 20:12:27.732494   22294 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:12:27.732537   22294 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0612 20:12:27.732552   22294 cache.go:56] Caching tarball of preloaded images
	I0612 20:12:27.732627   22294 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 20:12:27.732640   22294 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 20:12:27.732929   22294 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/config.json ...
	I0612 20:12:27.732956   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/config.json: {Name:mk0814d0dfa3d865c35e7e0ab42305e7a784a00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:27.733109   22294 start.go:360] acquireMachinesLock for addons-899843: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 20:12:27.733172   22294 start.go:364] duration metric: took 46.572µs to acquireMachinesLock for "addons-899843"
	I0612 20:12:27.733194   22294 start.go:93] Provisioning new machine with config: &{Name:addons-899843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-899843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:12:27.733282   22294 start.go:125] createHost starting for "" (driver="kvm2")
	I0612 20:12:27.735038   22294 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0612 20:12:27.735222   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:12:27.735273   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:12:27.749372   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41263
	I0612 20:12:27.749812   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:12:27.750520   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:12:27.750541   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:12:27.750879   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:12:27.751086   22294 main.go:141] libmachine: (addons-899843) Calling .GetMachineName
	I0612 20:12:27.751263   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:27.751426   22294 start.go:159] libmachine.API.Create for "addons-899843" (driver="kvm2")
	I0612 20:12:27.751454   22294 client.go:168] LocalClient.Create starting
	I0612 20:12:27.751497   22294 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem
	I0612 20:12:27.888279   22294 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem
	I0612 20:12:28.307942   22294 main.go:141] libmachine: Running pre-create checks...
	I0612 20:12:28.307965   22294 main.go:141] libmachine: (addons-899843) Calling .PreCreateCheck
	I0612 20:12:28.308473   22294 main.go:141] libmachine: (addons-899843) Calling .GetConfigRaw
	I0612 20:12:28.308921   22294 main.go:141] libmachine: Creating machine...
	I0612 20:12:28.308936   22294 main.go:141] libmachine: (addons-899843) Calling .Create
	I0612 20:12:28.309103   22294 main.go:141] libmachine: (addons-899843) Creating KVM machine...
	I0612 20:12:28.310432   22294 main.go:141] libmachine: (addons-899843) DBG | found existing default KVM network
	I0612 20:12:28.311157   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:28.310999   22316 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0612 20:12:28.311196   22294 main.go:141] libmachine: (addons-899843) DBG | created network xml: 
	I0612 20:12:28.311209   22294 main.go:141] libmachine: (addons-899843) DBG | <network>
	I0612 20:12:28.311223   22294 main.go:141] libmachine: (addons-899843) DBG |   <name>mk-addons-899843</name>
	I0612 20:12:28.311233   22294 main.go:141] libmachine: (addons-899843) DBG |   <dns enable='no'/>
	I0612 20:12:28.311237   22294 main.go:141] libmachine: (addons-899843) DBG |   
	I0612 20:12:28.311244   22294 main.go:141] libmachine: (addons-899843) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0612 20:12:28.311253   22294 main.go:141] libmachine: (addons-899843) DBG |     <dhcp>
	I0612 20:12:28.311259   22294 main.go:141] libmachine: (addons-899843) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0612 20:12:28.311264   22294 main.go:141] libmachine: (addons-899843) DBG |     </dhcp>
	I0612 20:12:28.311271   22294 main.go:141] libmachine: (addons-899843) DBG |   </ip>
	I0612 20:12:28.311278   22294 main.go:141] libmachine: (addons-899843) DBG |   
	I0612 20:12:28.311290   22294 main.go:141] libmachine: (addons-899843) DBG | </network>
	I0612 20:12:28.311299   22294 main.go:141] libmachine: (addons-899843) DBG | 
	I0612 20:12:28.316800   22294 main.go:141] libmachine: (addons-899843) DBG | trying to create private KVM network mk-addons-899843 192.168.39.0/24...
	I0612 20:12:28.377005   22294 main.go:141] libmachine: (addons-899843) DBG | private KVM network mk-addons-899843 192.168.39.0/24 created
	I0612 20:12:28.377034   22294 main.go:141] libmachine: (addons-899843) Setting up store path in /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843 ...
	I0612 20:12:28.377063   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:28.376982   22316 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:12:28.377086   22294 main.go:141] libmachine: (addons-899843) Building disk image from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0612 20:12:28.377137   22294 main.go:141] libmachine: (addons-899843) Downloading /home/jenkins/minikube-integration/17779-14199/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0612 20:12:28.636099   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:28.635977   22316 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa...
	I0612 20:12:28.782667   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:28.782536   22316 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/addons-899843.rawdisk...
	I0612 20:12:28.782695   22294 main.go:141] libmachine: (addons-899843) DBG | Writing magic tar header
	I0612 20:12:28.782707   22294 main.go:141] libmachine: (addons-899843) DBG | Writing SSH key tar header
	I0612 20:12:28.782715   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:28.782645   22316 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843 ...
	I0612 20:12:28.782726   22294 main.go:141] libmachine: (addons-899843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843
	I0612 20:12:28.782797   22294 main.go:141] libmachine: (addons-899843) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843 (perms=drwx------)
	I0612 20:12:28.782823   22294 main.go:141] libmachine: (addons-899843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines
	I0612 20:12:28.782831   22294 main.go:141] libmachine: (addons-899843) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines (perms=drwxr-xr-x)
	I0612 20:12:28.782846   22294 main.go:141] libmachine: (addons-899843) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube (perms=drwxr-xr-x)
	I0612 20:12:28.782857   22294 main.go:141] libmachine: (addons-899843) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199 (perms=drwxrwxr-x)
	I0612 20:12:28.782872   22294 main.go:141] libmachine: (addons-899843) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0612 20:12:28.782881   22294 main.go:141] libmachine: (addons-899843) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0612 20:12:28.782894   22294 main.go:141] libmachine: (addons-899843) Creating domain...
	I0612 20:12:28.782902   22294 main.go:141] libmachine: (addons-899843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:12:28.782914   22294 main.go:141] libmachine: (addons-899843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199
	I0612 20:12:28.782936   22294 main.go:141] libmachine: (addons-899843) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0612 20:12:28.782959   22294 main.go:141] libmachine: (addons-899843) DBG | Checking permissions on dir: /home/jenkins
	I0612 20:12:28.782974   22294 main.go:141] libmachine: (addons-899843) DBG | Checking permissions on dir: /home
	I0612 20:12:28.782989   22294 main.go:141] libmachine: (addons-899843) DBG | Skipping /home - not owner
	I0612 20:12:28.783882   22294 main.go:141] libmachine: (addons-899843) define libvirt domain using xml: 
	I0612 20:12:28.783909   22294 main.go:141] libmachine: (addons-899843) <domain type='kvm'>
	I0612 20:12:28.783916   22294 main.go:141] libmachine: (addons-899843)   <name>addons-899843</name>
	I0612 20:12:28.783925   22294 main.go:141] libmachine: (addons-899843)   <memory unit='MiB'>4000</memory>
	I0612 20:12:28.783948   22294 main.go:141] libmachine: (addons-899843)   <vcpu>2</vcpu>
	I0612 20:12:28.783966   22294 main.go:141] libmachine: (addons-899843)   <features>
	I0612 20:12:28.783993   22294 main.go:141] libmachine: (addons-899843)     <acpi/>
	I0612 20:12:28.784011   22294 main.go:141] libmachine: (addons-899843)     <apic/>
	I0612 20:12:28.784018   22294 main.go:141] libmachine: (addons-899843)     <pae/>
	I0612 20:12:28.784026   22294 main.go:141] libmachine: (addons-899843)     
	I0612 20:12:28.784031   22294 main.go:141] libmachine: (addons-899843)   </features>
	I0612 20:12:28.784036   22294 main.go:141] libmachine: (addons-899843)   <cpu mode='host-passthrough'>
	I0612 20:12:28.784041   22294 main.go:141] libmachine: (addons-899843)   
	I0612 20:12:28.784050   22294 main.go:141] libmachine: (addons-899843)   </cpu>
	I0612 20:12:28.784058   22294 main.go:141] libmachine: (addons-899843)   <os>
	I0612 20:12:28.784064   22294 main.go:141] libmachine: (addons-899843)     <type>hvm</type>
	I0612 20:12:28.784070   22294 main.go:141] libmachine: (addons-899843)     <boot dev='cdrom'/>
	I0612 20:12:28.784074   22294 main.go:141] libmachine: (addons-899843)     <boot dev='hd'/>
	I0612 20:12:28.784080   22294 main.go:141] libmachine: (addons-899843)     <bootmenu enable='no'/>
	I0612 20:12:28.784087   22294 main.go:141] libmachine: (addons-899843)   </os>
	I0612 20:12:28.784092   22294 main.go:141] libmachine: (addons-899843)   <devices>
	I0612 20:12:28.784099   22294 main.go:141] libmachine: (addons-899843)     <disk type='file' device='cdrom'>
	I0612 20:12:28.784113   22294 main.go:141] libmachine: (addons-899843)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/boot2docker.iso'/>
	I0612 20:12:28.784127   22294 main.go:141] libmachine: (addons-899843)       <target dev='hdc' bus='scsi'/>
	I0612 20:12:28.784143   22294 main.go:141] libmachine: (addons-899843)       <readonly/>
	I0612 20:12:28.784159   22294 main.go:141] libmachine: (addons-899843)     </disk>
	I0612 20:12:28.784172   22294 main.go:141] libmachine: (addons-899843)     <disk type='file' device='disk'>
	I0612 20:12:28.784181   22294 main.go:141] libmachine: (addons-899843)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0612 20:12:28.784190   22294 main.go:141] libmachine: (addons-899843)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/addons-899843.rawdisk'/>
	I0612 20:12:28.784198   22294 main.go:141] libmachine: (addons-899843)       <target dev='hda' bus='virtio'/>
	I0612 20:12:28.784203   22294 main.go:141] libmachine: (addons-899843)     </disk>
	I0612 20:12:28.784210   22294 main.go:141] libmachine: (addons-899843)     <interface type='network'>
	I0612 20:12:28.784219   22294 main.go:141] libmachine: (addons-899843)       <source network='mk-addons-899843'/>
	I0612 20:12:28.784234   22294 main.go:141] libmachine: (addons-899843)       <model type='virtio'/>
	I0612 20:12:28.784246   22294 main.go:141] libmachine: (addons-899843)     </interface>
	I0612 20:12:28.784254   22294 main.go:141] libmachine: (addons-899843)     <interface type='network'>
	I0612 20:12:28.784266   22294 main.go:141] libmachine: (addons-899843)       <source network='default'/>
	I0612 20:12:28.784275   22294 main.go:141] libmachine: (addons-899843)       <model type='virtio'/>
	I0612 20:12:28.784294   22294 main.go:141] libmachine: (addons-899843)     </interface>
	I0612 20:12:28.784307   22294 main.go:141] libmachine: (addons-899843)     <serial type='pty'>
	I0612 20:12:28.784326   22294 main.go:141] libmachine: (addons-899843)       <target port='0'/>
	I0612 20:12:28.784335   22294 main.go:141] libmachine: (addons-899843)     </serial>
	I0612 20:12:28.784346   22294 main.go:141] libmachine: (addons-899843)     <console type='pty'>
	I0612 20:12:28.784360   22294 main.go:141] libmachine: (addons-899843)       <target type='serial' port='0'/>
	I0612 20:12:28.784374   22294 main.go:141] libmachine: (addons-899843)     </console>
	I0612 20:12:28.784382   22294 main.go:141] libmachine: (addons-899843)     <rng model='virtio'>
	I0612 20:12:28.784389   22294 main.go:141] libmachine: (addons-899843)       <backend model='random'>/dev/random</backend>
	I0612 20:12:28.784398   22294 main.go:141] libmachine: (addons-899843)     </rng>
	I0612 20:12:28.784403   22294 main.go:141] libmachine: (addons-899843)     
	I0612 20:12:28.784407   22294 main.go:141] libmachine: (addons-899843)     
	I0612 20:12:28.784412   22294 main.go:141] libmachine: (addons-899843)   </devices>
	I0612 20:12:28.784418   22294 main.go:141] libmachine: (addons-899843) </domain>
	I0612 20:12:28.784425   22294 main.go:141] libmachine: (addons-899843) 
	I0612 20:12:28.790312   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:d6:2b:fa in network default
	I0612 20:12:28.790784   22294 main.go:141] libmachine: (addons-899843) Ensuring networks are active...
	I0612 20:12:28.790807   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:28.791382   22294 main.go:141] libmachine: (addons-899843) Ensuring network default is active
	I0612 20:12:28.791621   22294 main.go:141] libmachine: (addons-899843) Ensuring network mk-addons-899843 is active
	I0612 20:12:28.792048   22294 main.go:141] libmachine: (addons-899843) Getting domain xml...
	I0612 20:12:28.792635   22294 main.go:141] libmachine: (addons-899843) Creating domain...
	I0612 20:12:30.203186   22294 main.go:141] libmachine: (addons-899843) Waiting to get IP...
	I0612 20:12:30.204073   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:30.204483   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:30.204558   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:30.204488   22316 retry.go:31] will retry after 220.702949ms: waiting for machine to come up
	I0612 20:12:30.426917   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:30.427435   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:30.427461   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:30.427387   22316 retry.go:31] will retry after 336.04644ms: waiting for machine to come up
	I0612 20:12:30.765132   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:30.765585   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:30.765615   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:30.765551   22316 retry.go:31] will retry after 306.64442ms: waiting for machine to come up
	I0612 20:12:31.074156   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:31.074613   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:31.074643   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:31.074565   22316 retry.go:31] will retry after 510.553284ms: waiting for machine to come up
	I0612 20:12:31.586364   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:31.586793   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:31.586815   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:31.586749   22316 retry.go:31] will retry after 613.530836ms: waiting for machine to come up
	I0612 20:12:32.201589   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:32.202102   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:32.202126   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:32.202052   22316 retry.go:31] will retry after 574.741292ms: waiting for machine to come up
	I0612 20:12:32.778584   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:32.779073   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:32.779096   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:32.779008   22316 retry.go:31] will retry after 725.270321ms: waiting for machine to come up
	I0612 20:12:33.505767   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:33.506097   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:33.506123   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:33.506041   22316 retry.go:31] will retry after 1.392184112s: waiting for machine to come up
	I0612 20:12:34.900331   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:34.900741   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:34.900770   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:34.900721   22316 retry.go:31] will retry after 1.491312427s: waiting for machine to come up
	I0612 20:12:36.394363   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:36.394776   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:36.394803   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:36.394733   22316 retry.go:31] will retry after 2.066052302s: waiting for machine to come up
	I0612 20:12:38.462083   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:38.462507   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:38.462530   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:38.462454   22316 retry.go:31] will retry after 2.034306402s: waiting for machine to come up
	I0612 20:12:40.499615   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:40.500147   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:40.500171   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:40.500107   22316 retry.go:31] will retry after 2.283056423s: waiting for machine to come up
	I0612 20:12:42.785089   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:42.785491   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:42.785518   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:42.785434   22316 retry.go:31] will retry after 2.756143171s: waiting for machine to come up
	I0612 20:12:45.545347   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:45.545880   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find current IP address of domain addons-899843 in network mk-addons-899843
	I0612 20:12:45.545903   22294 main.go:141] libmachine: (addons-899843) DBG | I0612 20:12:45.545815   22316 retry.go:31] will retry after 4.896758392s: waiting for machine to come up
	I0612 20:12:50.445545   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.446012   22294 main.go:141] libmachine: (addons-899843) Found IP for machine: 192.168.39.248
	I0612 20:12:50.446029   22294 main.go:141] libmachine: (addons-899843) Reserving static IP address...
	I0612 20:12:50.446037   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has current primary IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.446468   22294 main.go:141] libmachine: (addons-899843) DBG | unable to find host DHCP lease matching {name: "addons-899843", mac: "52:54:00:58:9b:d7", ip: "192.168.39.248"} in network mk-addons-899843
	I0612 20:12:50.516055   22294 main.go:141] libmachine: (addons-899843) DBG | Getting to WaitForSSH function...
	I0612 20:12:50.516085   22294 main.go:141] libmachine: (addons-899843) Reserved static IP address: 192.168.39.248
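The retry loop above simply polls until the guest picks up a DHCP lease on mk-addons-899843; once it does, 192.168.39.248 is reserved for MAC 52:54:00:58:9b:d7. A hedged manual equivalent of that lookup:
	# list leases handed out on the libvirt network and match the VM's MAC (illustrative)
	virsh -c qemu:///system net-dhcp-leases mk-addons-899843 | grep 52:54:00:58:9b:d7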
	I0612 20:12:50.516098   22294 main.go:141] libmachine: (addons-899843) Waiting for SSH to be available...
	I0612 20:12:50.518668   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.519236   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:minikube Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:50.519260   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.519374   22294 main.go:141] libmachine: (addons-899843) DBG | Using SSH client type: external
	I0612 20:12:50.519398   22294 main.go:141] libmachine: (addons-899843) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa (-rw-------)
	I0612 20:12:50.519429   22294 main.go:141] libmachine: (addons-899843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 20:12:50.519448   22294 main.go:141] libmachine: (addons-899843) DBG | About to run SSH command:
	I0612 20:12:50.519476   22294 main.go:141] libmachine: (addons-899843) DBG | exit 0
	I0612 20:12:50.651104   22294 main.go:141] libmachine: (addons-899843) DBG | SSH cmd err, output: <nil>: 
	I0612 20:12:50.651316   22294 main.go:141] libmachine: (addons-899843) KVM machine creation complete!
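The "exit 0" probe is run through an external ssh client with the argument list logged at 20:12:50.519429; rearranged into a standalone invocation (options moved ahead of the destination), it is roughly:
	/usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	  -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	  -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -o IdentitiesOnly=yes -p 22 \
	  -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa \
	  docker@192.168.39.248 "exit 0"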
	I0612 20:12:50.651664   22294 main.go:141] libmachine: (addons-899843) Calling .GetConfigRaw
	I0612 20:12:50.652233   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:50.652433   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:50.652613   22294 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0612 20:12:50.652632   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:12:50.654163   22294 main.go:141] libmachine: Detecting operating system of created instance...
	I0612 20:12:50.654177   22294 main.go:141] libmachine: Waiting for SSH to be available...
	I0612 20:12:50.654193   22294 main.go:141] libmachine: Getting to WaitForSSH function...
	I0612 20:12:50.654199   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:50.656495   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.656845   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:50.656872   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.656965   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:50.657193   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.657348   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.657457   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:50.657598   22294 main.go:141] libmachine: Using SSH client type: native
	I0612 20:12:50.657812   22294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0612 20:12:50.657825   22294 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0612 20:12:50.758727   22294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 20:12:50.758762   22294 main.go:141] libmachine: Detecting the provisioner...
	I0612 20:12:50.758771   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:50.761351   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.761732   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:50.761757   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.761950   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:50.762152   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.762272   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.762381   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:50.762643   22294 main.go:141] libmachine: Using SSH client type: native
	I0612 20:12:50.762833   22294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0612 20:12:50.762846   22294 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0612 20:12:50.864040   22294 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0612 20:12:50.864109   22294 main.go:141] libmachine: found compatible host: buildroot
	I0612 20:12:50.864117   22294 main.go:141] libmachine: Provisioning with buildroot...
	I0612 20:12:50.864131   22294 main.go:141] libmachine: (addons-899843) Calling .GetMachineName
	I0612 20:12:50.864401   22294 buildroot.go:166] provisioning hostname "addons-899843"
	I0612 20:12:50.864426   22294 main.go:141] libmachine: (addons-899843) Calling .GetMachineName
	I0612 20:12:50.864646   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:50.867751   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.868206   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:50.868230   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.868395   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:50.868589   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.868761   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.868899   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:50.869054   22294 main.go:141] libmachine: Using SSH client type: native
	I0612 20:12:50.869215   22294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0612 20:12:50.869227   22294 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-899843 && echo "addons-899843" | sudo tee /etc/hostname
	I0612 20:12:50.987938   22294 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-899843
	
	I0612 20:12:50.987968   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:50.990308   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.990587   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:50.990607   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:50.990758   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:50.991055   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.991227   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:50.991386   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:50.991566   22294 main.go:141] libmachine: Using SSH client type: native
	I0612 20:12:50.991762   22294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0612 20:12:50.991787   22294 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-899843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-899843/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-899843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 20:12:51.103699   22294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 20:12:51.103725   22294 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 20:12:51.103759   22294 buildroot.go:174] setting up certificates
	I0612 20:12:51.103770   22294 provision.go:84] configureAuth start
	I0612 20:12:51.103778   22294 main.go:141] libmachine: (addons-899843) Calling .GetMachineName
	I0612 20:12:51.104072   22294 main.go:141] libmachine: (addons-899843) Calling .GetIP
	I0612 20:12:51.106750   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.107071   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.107119   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.107253   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:51.109229   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.109584   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.109612   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.109733   22294 provision.go:143] copyHostCerts
	I0612 20:12:51.109815   22294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 20:12:51.109938   22294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 20:12:51.109999   22294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 20:12:51.110043   22294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.addons-899843 san=[127.0.0.1 192.168.39.248 addons-899843 localhost minikube]
	I0612 20:12:51.255476   22294 provision.go:177] copyRemoteCerts
	I0612 20:12:51.255529   22294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 20:12:51.255550   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:51.257967   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.258321   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.258346   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.258512   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:51.258693   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.258881   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:51.259034   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:12:51.343093   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 20:12:51.366909   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0612 20:12:51.390260   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 20:12:51.414942   22294 provision.go:87] duration metric: took 311.160243ms to configureAuth
	I0612 20:12:51.414968   22294 buildroot.go:189] setting minikube options for container-runtime
	I0612 20:12:51.415194   22294 config.go:182] Loaded profile config "addons-899843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:12:51.415279   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:51.417934   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.418359   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.418389   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.418608   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:51.418805   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.418962   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.419085   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:51.419248   22294 main.go:141] libmachine: Using SSH client type: native
	I0612 20:12:51.419412   22294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0612 20:12:51.419426   22294 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 20:12:51.692326   22294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
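The %!s(MISSING) in the command at 20:12:51.419426 is a logging artifact (the printf verb gets re-interpreted when the command is echoed into the log); judging from the file contents echoed back above, what actually ran amounts to:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio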
	I0612 20:12:51.692357   22294 main.go:141] libmachine: Checking connection to Docker...
	I0612 20:12:51.692370   22294 main.go:141] libmachine: (addons-899843) Calling .GetURL
	I0612 20:12:51.693519   22294 main.go:141] libmachine: (addons-899843) DBG | Using libvirt version 6000000
	I0612 20:12:51.695764   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.696100   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.696126   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.696270   22294 main.go:141] libmachine: Docker is up and running!
	I0612 20:12:51.696289   22294 main.go:141] libmachine: Reticulating splines...
	I0612 20:12:51.696297   22294 client.go:171] duration metric: took 23.944833507s to LocalClient.Create
	I0612 20:12:51.696319   22294 start.go:167] duration metric: took 23.9448957s to libmachine.API.Create "addons-899843"
	I0612 20:12:51.696337   22294 start.go:293] postStartSetup for "addons-899843" (driver="kvm2")
	I0612 20:12:51.696348   22294 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 20:12:51.696363   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:51.696580   22294 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 20:12:51.696603   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:51.698554   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.698898   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.698922   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.699050   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:51.699243   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.699407   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:51.699537   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:12:51.782216   22294 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 20:12:51.786528   22294 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 20:12:51.786550   22294 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 20:12:51.786640   22294 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 20:12:51.786678   22294 start.go:296] duration metric: took 90.333793ms for postStartSetup
	I0612 20:12:51.786716   22294 main.go:141] libmachine: (addons-899843) Calling .GetConfigRaw
	I0612 20:12:51.787304   22294 main.go:141] libmachine: (addons-899843) Calling .GetIP
	I0612 20:12:51.789850   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.790157   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.790187   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.790380   22294 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/config.json ...
	I0612 20:12:51.790542   22294 start.go:128] duration metric: took 24.057249977s to createHost
	I0612 20:12:51.790563   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:51.792358   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.792654   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.792678   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.792826   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:51.793036   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.793186   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.793358   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:51.793548   22294 main.go:141] libmachine: Using SSH client type: native
	I0612 20:12:51.793734   22294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0612 20:12:51.793745   22294 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 20:12:51.896378   22294 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718223171.863565953
	
	I0612 20:12:51.896403   22294 fix.go:216] guest clock: 1718223171.863565953
	I0612 20:12:51.896416   22294 fix.go:229] Guest: 2024-06-12 20:12:51.863565953 +0000 UTC Remote: 2024-06-12 20:12:51.790553747 +0000 UTC m=+24.159727019 (delta=73.012206ms)
	I0612 20:12:51.896443   22294 fix.go:200] guest clock delta is within tolerance: 73.012206ms
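The clock-skew probe at 20:12:51.793745 suffers the same logging artifact; given the seconds.nanoseconds value it returns, the command is simply:
	date +%s.%N   # guest wall clock, diffed against the host to compute the delta above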
	I0612 20:12:51.896450   22294 start.go:83] releasing machines lock for "addons-899843", held for 24.163267679s
	I0612 20:12:51.896476   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:51.896733   22294 main.go:141] libmachine: (addons-899843) Calling .GetIP
	I0612 20:12:51.899507   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.899923   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.899954   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.900123   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:51.900722   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:51.900910   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:12:51.901063   22294 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 20:12:51.901132   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:51.901155   22294 ssh_runner.go:195] Run: cat /version.json
	I0612 20:12:51.901181   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:12:51.903467   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.903820   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.903852   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.903881   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.903948   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:51.904116   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.904261   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:51.904275   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:51.904282   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:51.904407   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:12:51.904600   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:12:51.904594   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:12:51.904759   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:12:51.904900   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:12:51.980538   22294 ssh_runner.go:195] Run: systemctl --version
	I0612 20:12:52.005563   22294 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 20:12:52.166193   22294 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 20:12:52.175084   22294 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 20:12:52.175159   22294 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 20:12:52.193412   22294 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
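The find invocation at 20:12:52.175159 is likewise mangled by %!p(MISSING); its effect, confirmed by the line above, is to rename bridge/podman CNI configs out of the way. A shell-quoted reconstruction (hedged; the driver passes the arguments without a shell):
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;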
	I0612 20:12:52.193433   22294 start.go:494] detecting cgroup driver to use...
	I0612 20:12:52.193496   22294 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 20:12:52.211891   22294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 20:12:52.226647   22294 docker.go:217] disabling cri-docker service (if available) ...
	I0612 20:12:52.226711   22294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 20:12:52.240593   22294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 20:12:52.254096   22294 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 20:12:52.367130   22294 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 20:12:52.508630   22294 docker.go:233] disabling docker service ...
	I0612 20:12:52.508702   22294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 20:12:52.523339   22294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 20:12:52.536917   22294 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 20:12:52.680583   22294 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 20:12:52.799487   22294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 20:12:52.813911   22294 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 20:12:52.833795   22294 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 20:12:52.833864   22294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:12:52.844958   22294 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 20:12:52.845039   22294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:12:52.856226   22294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:12:52.867258   22294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:12:52.878052   22294 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 20:12:52.889287   22294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:12:52.899601   22294 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:12:52.916357   22294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
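Taken together, the sed runs from 20:12:52.833 to 20:12:52.916 rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager with conmon in the pod cgroup, and an unprivileged-port sysctl. A condensed sketch of the same edits:
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"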
	I0612 20:12:52.926649   22294 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 20:12:52.935722   22294 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 20:12:52.935786   22294 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 20:12:52.956751   22294 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 20:12:52.967943   22294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:12:53.084220   22294 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 20:12:53.215347   22294 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 20:12:53.215425   22294 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 20:12:53.219998   22294 start.go:562] Will wait 60s for crictl version
	I0612 20:12:53.220048   22294 ssh_runner.go:195] Run: which crictl
	I0612 20:12:53.223692   22294 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 20:12:53.264809   22294 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 20:12:53.264914   22294 ssh_runner.go:195] Run: crio --version
	I0612 20:12:53.296197   22294 ssh_runner.go:195] Run: crio --version
	I0612 20:12:53.328882   22294 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 20:12:53.330420   22294 main.go:141] libmachine: (addons-899843) Calling .GetIP
	I0612 20:12:53.332932   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:53.333195   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:12:53.333215   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:12:53.333485   22294 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 20:12:53.337798   22294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 20:12:53.352215   22294 kubeadm.go:877] updating cluster {Name:addons-899843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-899843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 20:12:53.352303   22294 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:12:53.352351   22294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 20:12:53.383895   22294 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 20:12:53.383947   22294 ssh_runner.go:195] Run: which lz4
	I0612 20:12:53.387860   22294 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0612 20:12:53.392318   22294 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 20:12:53.392352   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 20:12:54.738861   22294 crio.go:462] duration metric: took 1.351034202s to copy over tarball
	I0612 20:12:54.738937   22294 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 20:12:56.990348   22294 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.251379217s)
	I0612 20:12:56.990379   22294 crio.go:469] duration metric: took 2.251483754s to extract the tarball
	I0612 20:12:56.990387   22294 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 20:12:57.028082   22294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 20:12:57.071111   22294 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 20:12:57.071133   22294 cache_images.go:84] Images are preloaded, skipping loading
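Because the first crictl check (20:12:53.383895) found no preloaded images, the ~395 MB preload tarball was copied to the guest over scp and unpacked; condensed, the guest-side steps from 20:12:53.387 to 20:12:57.071 come down to roughly:
	# tarball already copied to /preloaded.tar.lz4 by scp
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	sudo crictl images --output json   # now reports every required image as present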
	I0612 20:12:57.071140   22294 kubeadm.go:928] updating node { 192.168.39.248 8443 v1.30.1 crio true true} ...
	I0612 20:12:57.071263   22294 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-899843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-899843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 20:12:57.071346   22294 ssh_runner.go:195] Run: crio config
	I0612 20:12:57.118176   22294 cni.go:84] Creating CNI manager for ""
	I0612 20:12:57.118198   22294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 20:12:57.118206   22294 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 20:12:57.118230   22294 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.248 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-899843 NodeName:addons-899843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 20:12:57.118381   22294 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-899843"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.248"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 20:12:57.118497   22294 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 20:12:57.129959   22294 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 20:12:57.130023   22294 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 20:12:57.140846   22294 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0612 20:12:57.157983   22294 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 20:12:57.174539   22294 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0612 20:12:57.191073   22294 ssh_runner.go:195] Run: grep 192.168.39.248	control-plane.minikube.internal$ /etc/hosts
	I0612 20:12:57.194915   22294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 20:12:57.208228   22294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:12:57.343213   22294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 20:12:57.362130   22294 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843 for IP: 192.168.39.248
	I0612 20:12:57.362166   22294 certs.go:194] generating shared ca certs ...
	I0612 20:12:57.362192   22294 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:57.362366   22294 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 20:12:57.669661   22294 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt ...
	I0612 20:12:57.669688   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt: {Name:mkd1af81bf97f1c0885dd57c35a317726bd3e69a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:57.669855   22294 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key ...
	I0612 20:12:57.669869   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key: {Name:mka2d81b38abf69ca1705fcee8bcf4cdf7c55924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:57.669979   22294 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 20:12:57.785832   22294 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt ...
	I0612 20:12:57.785859   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt: {Name:mk6e97d71149d268999fee6d2feb14575dee2d03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:57.786051   22294 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key ...
	I0612 20:12:57.786065   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key: {Name:mkcd801bf7fc5f2fae41be0bda174154814a3e88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:57.786164   22294 certs.go:256] generating profile certs ...
	I0612 20:12:57.786220   22294 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.key
	I0612 20:12:57.786233   22294 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt with IP's: []
	I0612 20:12:57.997556   22294 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt ...
	I0612 20:12:57.997585   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: {Name:mk4ba79d69ef12d7b904cd7b47ee6e16bfd1f7cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:57.997769   22294 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.key ...
	I0612 20:12:57.997783   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.key: {Name:mk2e6da6374b0c00451e88928371fd21bdb19d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:57.997876   22294 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.key.e101c7ef
	I0612 20:12:57.997896   22294 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.crt.e101c7ef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.248]
	I0612 20:12:58.060260   22294 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.crt.e101c7ef ...
	I0612 20:12:58.060286   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.crt.e101c7ef: {Name:mk1b60ef7b26a48410dbad630333449f4eecbb22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:58.060451   22294 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.key.e101c7ef ...
	I0612 20:12:58.060465   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.key.e101c7ef: {Name:mkaa467e4110b4f3f44dea8e097d45db83fece80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:58.060555   22294 certs.go:381] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.crt.e101c7ef -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.crt
	I0612 20:12:58.060626   22294 certs.go:385] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.key.e101c7ef -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.key
	I0612 20:12:58.060674   22294 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.key
	I0612 20:12:58.060690   22294 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.crt with IP's: []
	I0612 20:12:58.163482   22294 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.crt ...
	I0612 20:12:58.163509   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.crt: {Name:mk55f789d5f0a08841bd1cf3c48a5bbb02e1b769 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:58.163685   22294 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.key ...
	I0612 20:12:58.163698   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.key: {Name:mkd8c4c104647278f13822fac0e7b4f1aec25fd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:12:58.163890   22294 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 20:12:58.163928   22294 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 20:12:58.163951   22294 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 20:12:58.163974   22294 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 20:12:58.164538   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 20:12:58.205900   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 20:12:58.254025   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 20:12:58.278333   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 20:12:58.302618   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0612 20:12:58.326804   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 20:12:58.349935   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 20:12:58.374082   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0612 20:12:58.397196   22294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 20:12:58.420771   22294 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 20:12:58.437594   22294 ssh_runner.go:195] Run: openssl version
	I0612 20:12:58.443733   22294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 20:12:58.455367   22294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:12:58.460309   22294 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:12:58.460367   22294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:12:58.466786   22294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
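The two commands above wire the generated minikubeCA certificate into the system trust store: OpenSSL locates CA certificates through symlinks named after the certificate's subject hash, which is exactly what the logged steps compute and create. A minimal sketch of the same sequence (the log shows the hash resolving to b5213941):

	# compute the OpenSSL subject hash of the CA certificate
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# create the hash-named symlink OpenSSL expects in /etc/ssl/certs
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"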
	I0612 20:12:58.478651   22294 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 20:12:58.483154   22294 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 20:12:58.483223   22294 kubeadm.go:391] StartCluster: {Name:addons-899843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 C
lusterName:addons-899843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:12:58.483295   22294 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 20:12:58.483334   22294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 20:12:58.527011   22294 cri.go:89] found id: ""
	I0612 20:12:58.527087   22294 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0612 20:12:58.538058   22294 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 20:12:58.548594   22294 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 20:12:58.559198   22294 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 20:12:58.559222   22294 kubeadm.go:156] found existing configuration files:
	
	I0612 20:12:58.559272   22294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 20:12:58.569359   22294 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 20:12:58.569413   22294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 20:12:58.579573   22294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 20:12:58.589537   22294 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 20:12:58.589591   22294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 20:12:58.599451   22294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 20:12:58.608577   22294 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 20:12:58.608627   22294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 20:12:58.618359   22294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 20:12:58.627643   22294 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 20:12:58.627686   22294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 20:12:58.637024   22294 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 20:12:58.706936   22294 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 20:12:58.707002   22294 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 20:12:58.826862   22294 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 20:12:58.827025   22294 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 20:12:58.827183   22294 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 20:12:59.058094   22294 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 20:12:59.275457   22294 out.go:204]   - Generating certificates and keys ...
	I0612 20:12:59.275565   22294 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 20:12:59.275662   22294 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 20:12:59.329967   22294 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0612 20:12:59.555367   22294 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0612 20:12:59.797115   22294 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0612 20:12:59.866831   22294 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0612 20:13:00.090142   22294 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0612 20:13:00.090271   22294 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-899843 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
	I0612 20:13:00.383818   22294 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0612 20:13:00.384041   22294 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-899843 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
	I0612 20:13:00.527310   22294 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0612 20:13:00.710065   22294 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0612 20:13:00.945971   22294 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0612 20:13:00.946093   22294 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 20:13:01.076251   22294 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 20:13:01.433324   22294 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 20:13:01.628583   22294 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 20:13:01.961599   22294 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 20:13:02.231153   22294 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 20:13:02.231662   22294 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 20:13:02.235487   22294 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 20:13:02.237388   22294 out.go:204]   - Booting up control plane ...
	I0612 20:13:02.237492   22294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 20:13:02.237604   22294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 20:13:02.237696   22294 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 20:13:02.252487   22294 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 20:13:02.253468   22294 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 20:13:02.253549   22294 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 20:13:02.389148   22294 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 20:13:02.389289   22294 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 20:13:03.389831   22294 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001341582s
	I0612 20:13:03.389930   22294 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 20:13:08.388793   22294 kubeadm.go:309] [api-check] The API server is healthy after 5.001153602s
	I0612 20:13:08.406642   22294 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 20:13:08.422882   22294 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 20:13:08.457258   22294 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 20:13:08.457563   22294 kubeadm.go:309] [mark-control-plane] Marking the node addons-899843 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 20:13:08.471343   22294 kubeadm.go:309] [bootstrap-token] Using token: ix88o6.5ao8ybr6u6nckbj4
	I0612 20:13:08.472713   22294 out.go:204]   - Configuring RBAC rules ...
	I0612 20:13:08.472932   22294 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 20:13:08.477906   22294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 20:13:08.490328   22294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 20:13:08.493664   22294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 20:13:08.497300   22294 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 20:13:08.501087   22294 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 20:13:08.799470   22294 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 20:13:09.233690   22294 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 20:13:09.807309   22294 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 20:13:09.808272   22294 kubeadm.go:309] 
	I0612 20:13:09.808340   22294 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 20:13:09.808351   22294 kubeadm.go:309] 
	I0612 20:13:09.808438   22294 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 20:13:09.808463   22294 kubeadm.go:309] 
	I0612 20:13:09.808521   22294 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 20:13:09.808604   22294 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 20:13:09.808689   22294 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 20:13:09.808699   22294 kubeadm.go:309] 
	I0612 20:13:09.808778   22294 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 20:13:09.808788   22294 kubeadm.go:309] 
	I0612 20:13:09.808853   22294 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 20:13:09.808862   22294 kubeadm.go:309] 
	I0612 20:13:09.808932   22294 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 20:13:09.809028   22294 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 20:13:09.809176   22294 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 20:13:09.809194   22294 kubeadm.go:309] 
	I0612 20:13:09.809301   22294 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 20:13:09.809401   22294 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 20:13:09.809419   22294 kubeadm.go:309] 
	I0612 20:13:09.809524   22294 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ix88o6.5ao8ybr6u6nckbj4 \
	I0612 20:13:09.809663   22294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 20:13:09.809704   22294 kubeadm.go:309] 	--control-plane 
	I0612 20:13:09.809714   22294 kubeadm.go:309] 
	I0612 20:13:09.809896   22294 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 20:13:09.809914   22294 kubeadm.go:309] 
	I0612 20:13:09.810013   22294 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ix88o6.5ao8ybr6u6nckbj4 \
	I0612 20:13:09.810144   22294 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 20:13:09.810379   22294 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 20:13:09.810410   22294 cni.go:84] Creating CNI manager for ""
	I0612 20:13:09.810419   22294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 20:13:09.812440   22294 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 20:13:09.813908   22294 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 20:13:09.827476   22294 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
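With the bridge CNI selected above, minikube writes its conflist to /etc/cni/net.d/1-k8s.conflist on the node. A hedged sketch (profile name and path taken from this log) of verifying the file is in place:

	minikube -p addons-899843 ssh -- sudo ls /etc/cni/net.d
	minikube -p addons-899843 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist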
	I0612 20:13:09.855803   22294 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 20:13:09.855927   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-899843 minikube.k8s.io/updated_at=2024_06_12T20_13_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=addons-899843 minikube.k8s.io/primary=true
	I0612 20:13:09.855931   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:09.983410   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:10.017604   22294 ops.go:34] apiserver oom_adj: -16
	I0612 20:13:10.483963   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:10.984405   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:11.484414   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:11.983809   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:12.483844   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:12.983944   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:13.484138   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:13.984279   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:14.483512   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:14.983514   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:15.483549   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:15.984310   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:16.484133   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:16.984066   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:17.483582   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:17.983873   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:18.483584   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:18.983588   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:19.484002   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:19.984354   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:20.484367   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:20.983585   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:21.483828   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:21.983793   22294 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:13:22.067919   22294 kubeadm.go:1107] duration metric: took 12.212071248s to wait for elevateKubeSystemPrivileges
	W0612 20:13:22.067957   22294 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 20:13:22.067967   22294 kubeadm.go:393] duration metric: took 23.584748426s to StartCluster
	I0612 20:13:22.067988   22294 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:13:22.068115   22294 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:13:22.068462   22294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:13:22.068645   22294 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0612 20:13:22.068662   22294 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:13:22.070636   22294 out.go:177] * Verifying Kubernetes components...
	I0612 20:13:22.068721   22294 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0612 20:13:22.068846   22294 config.go:182] Loaded profile config "addons-899843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
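The toEnable map above drives the addon managers minikube spins up next (the run of "Setting addon ... in addons-899843" lines that follow). A hedged sketch of exercising the same addons from the CLI against this profile:

	minikube -p addons-899843 addons list
	minikube -p addons-899843 addons enable metrics-server
	minikube -p addons-899843 addons enable ingress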
	I0612 20:13:22.071939   22294 addons.go:69] Setting yakd=true in profile "addons-899843"
	I0612 20:13:22.071945   22294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:13:22.071955   22294 addons.go:69] Setting cloud-spanner=true in profile "addons-899843"
	I0612 20:13:22.071972   22294 addons.go:234] Setting addon yakd=true in "addons-899843"
	I0612 20:13:22.071977   22294 addons.go:69] Setting gcp-auth=true in profile "addons-899843"
	I0612 20:13:22.072022   22294 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-899843"
	I0612 20:13:22.072051   22294 mustload.go:65] Loading cluster: addons-899843
	I0612 20:13:22.072084   22294 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-899843"
	I0612 20:13:22.072118   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.071969   22294 addons.go:69] Setting registry=true in profile "addons-899843"
	I0612 20:13:22.072171   22294 addons.go:234] Setting addon registry=true in "addons-899843"
	I0612 20:13:22.072197   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.071983   22294 addons.go:234] Setting addon cloud-spanner=true in "addons-899843"
	I0612 20:13:22.072244   22294 config.go:182] Loaded profile config "addons-899843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:13:22.072267   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.071990   22294 addons.go:69] Setting helm-tiller=true in profile "addons-899843"
	I0612 20:13:22.072354   22294 addons.go:234] Setting addon helm-tiller=true in "addons-899843"
	I0612 20:13:22.072388   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.071990   22294 addons.go:69] Setting inspektor-gadget=true in profile "addons-899843"
	I0612 20:13:22.072443   22294 addons.go:234] Setting addon inspektor-gadget=true in "addons-899843"
	I0612 20:13:22.072471   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.071996   22294 addons.go:69] Setting metrics-server=true in profile "addons-899843"
	I0612 20:13:22.072530   22294 addons.go:234] Setting addon metrics-server=true in "addons-899843"
	I0612 20:13:22.072561   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.072568   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.072576   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.072583   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.071946   22294 addons.go:69] Setting ingress-dns=true in profile "addons-899843"
	I0612 20:13:22.072598   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072600   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072614   22294 addons.go:234] Setting addon ingress-dns=true in "addons-899843"
	I0612 20:13:22.072639   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.072645   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.072665   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072771   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.071995   22294 addons.go:69] Setting ingress=true in profile "addons-899843"
	I0612 20:13:22.072801   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072800   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.072813   22294 addons.go:234] Setting addon ingress=true in "addons-899843"
	I0612 20:13:22.072821   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072833   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072000   22294 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-899843"
	I0612 20:13:22.072858   22294 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-899843"
	I0612 20:13:22.072002   22294 addons.go:69] Setting default-storageclass=true in profile "addons-899843"
	I0612 20:13:22.072007   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.072888   22294 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-899843"
	I0612 20:13:22.072005   22294 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-899843"
	I0612 20:13:22.072913   22294 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-899843"
	I0612 20:13:22.072009   22294 addons.go:69] Setting storage-provisioner=true in profile "addons-899843"
	I0612 20:13:22.072934   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.072953   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072951   22294 addons.go:234] Setting addon storage-provisioner=true in "addons-899843"
	I0612 20:13:22.072012   22294 addons.go:69] Setting volumesnapshots=true in profile "addons-899843"
	I0612 20:13:22.072974   22294 addons.go:234] Setting addon volumesnapshots=true in "addons-899843"
	I0612 20:13:22.073106   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.073213   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073231   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.073231   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073249   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073261   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.073277   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.072006   22294 addons.go:69] Setting volcano=true in profile "addons-899843"
	I0612 20:13:22.073308   22294 addons.go:234] Setting addon volcano=true in "addons-899843"
	I0612 20:13:22.073377   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.073439   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073467   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.073513   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.073520   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073544   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.073550   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.073699   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073725   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.073838   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073854   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.073870   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.073878   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.074079   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.074445   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.074481   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.092809   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43405
	I0612 20:13:22.093139   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42307
	I0612 20:13:22.093237   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35683
	I0612 20:13:22.093493   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.093899   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.093901   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44757
	I0612 20:13:22.094008   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.094092   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.094424   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33821
	I0612 20:13:22.094487   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.094502   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.094502   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.094586   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.094859   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.095041   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.095060   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.095094   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.095224   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.095443   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.095492   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.095514   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.095625   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.095639   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.095813   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.095969   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.103632   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.103642   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.103678   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.103682   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.103638   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.103750   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.103942   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.103972   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.104024   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.104052   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.111630   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42727
	I0612 20:13:22.112271   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.112818   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.112842   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.113167   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.113741   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.113772   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.139974   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36783
	I0612 20:13:22.140197   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34471
	I0612 20:13:22.140319   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I0612 20:13:22.140600   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.140839   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.140944   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.141073   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.141097   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.141607   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.141623   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.141751   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.141763   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.141826   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.142365   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.142402   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.142605   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.142613   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.142802   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.142935   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.144892   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.145096   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41319
	I0612 20:13:22.145211   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I0612 20:13:22.147424   22294 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0612 20:13:22.145825   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.145876   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.146213   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.146997   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42811
	I0612 20:13:22.148921   22294 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0612 20:13:22.148938   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0612 20:13:22.148957   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.149953   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.149970   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.149977   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.150034   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0612 20:13:22.151891   22294 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0612 20:13:22.150213   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.150800   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.150833   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.151686   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.153298   22294 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0612 20:13:22.153309   22294 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0612 20:13:22.153328   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.153359   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.153397   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.154682   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.154688   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43173
	I0612 20:13:22.154715   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.154689   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.154742   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.154791   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.154806   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.155403   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.155469   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.155553   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.155774   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.155816   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.156620   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.157216   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.157234   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.157527   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.157541   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.157884   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.157927   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.158085   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.158129   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.158566   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.158785   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.159298   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.159380   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.161578   22294 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0612 20:13:22.161210   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.162068   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.163048   22294 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0612 20:13:22.164658   22294 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0612 20:13:22.164689   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0612 20:13:22.164705   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.162783   22294 addons.go:234] Setting addon default-storageclass=true in "addons-899843"
	I0612 20:13:22.164778   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.165139   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.165173   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.162787   22294 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-899843"
	I0612 20:13:22.166507   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.166864   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.166899   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.168649   22294 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0612 20:13:22.163106   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.163265   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.163931   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39011
	I0612 20:13:22.170087   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.170128   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.171602   22294 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0612 20:13:22.170483   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.170725   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.171455   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.171494   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.173191   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.173310   22294 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0612 20:13:22.173327   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0612 20:13:22.173351   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.173356   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.174613   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.174632   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.174694   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36473
	I0612 20:13:22.174828   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.175109   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.175163   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.175357   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.175600   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.176145   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.176161   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.176616   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.176638   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.177105   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.177324   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.178558   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.178928   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.178945   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.179132   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.179330   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.179498   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.179555   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.179737   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.181965   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0612 20:13:22.181007   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43275
	I0612 20:13:22.181199   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36741
	I0612 20:13:22.181417   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38873
	I0612 20:13:22.182409   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37051
	I0612 20:13:22.188176   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0612 20:13:22.183767   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.183913   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.184271   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.187211   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46319
	I0612 20:13:22.187216   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39637
	I0612 20:13:22.187221   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0612 20:13:22.187640   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.187899   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0612 20:13:22.188943   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0612 20:13:22.191885   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0612 20:13:22.190083   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.190456   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.190522   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.190863   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.190892   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.190964   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.191071   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.191237   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.191350   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.193240   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.193313   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.194736   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0612 20:13:22.193370   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.193413   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.193741   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.193780   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.193950   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.194033   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.194119   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.194199   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.194515   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.196094   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.197590   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0612 20:13:22.196312   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.196325   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.196340   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.196379   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.196773   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.196791   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.196792   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.196839   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.197086   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.200360   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0612 20:13:22.198925   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.199281   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.199291   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.199345   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.199366   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.199659   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.200211   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.200458   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.200475   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:22.201267   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0612 20:13:22.203433   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0612 20:13:22.201721   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.201838   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.202118   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.202138   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.202340   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.202424   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.203307   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.203702   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.204441   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.207233   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0612 20:13:22.205503   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.205813   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40247
	I0612 20:13:22.205841   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.205854   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.205899   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.206381   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.207404   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.208506   22294 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0612 20:13:22.209729   22294 out.go:177]   - Using image docker.io/registry:2.8.3
	I0612 20:13:22.211156   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.211191   22294 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 20:13:22.211255   22294 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0612 20:13:22.211479   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:22.212579   22294 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0612 20:13:22.212590   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:22.215611   22294 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0612 20:13:22.215627   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0612 20:13:22.215643   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.217299   22294 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 20:13:22.217314   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 20:13:22.217329   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.212677   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.213029   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:22.213058   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:22.217410   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:22.217419   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:22.217427   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:22.214573   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.217705   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:22.217733   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:22.217741   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	W0612 20:13:22.217816   22294 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0612 20:13:22.219754   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.220884   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.221362   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.221390   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.221545   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.221594   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.221803   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.222068   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.222162   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.222358   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.223981   22294 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0612 20:13:22.222622   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.222798   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.222955   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.222995   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.223222   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.225588   22294 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 20:13:22.225600   22294 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 20:13:22.225619   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.225929   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.225951   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.226227   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.226281   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.226553   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.226602   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.226754   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.226835   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.229727   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36193
	I0612 20:13:22.230178   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.230696   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.230712   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.230770   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44791
	I0612 20:13:22.230916   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44475
	I0612 20:13:22.231262   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.231353   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.231399   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.231961   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45147
	I0612 20:13:22.231975   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.232002   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.231964   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.232058   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.232058   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.232061   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.232082   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.232456   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.232491   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.232502   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.232555   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.232662   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.232800   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.232854   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.233039   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.241426   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I0612 20:13:22.241575   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.241681   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.241742   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.241782   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.241951   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I0612 20:13:22.242116   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.242120   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.242242   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.242254   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.242565   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.242895   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.243153   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:22.243212   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:22.243550   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.243565   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.243681   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.243916   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.244324   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.244392   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.247156   22294 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0612 20:13:22.245585   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.246549   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.246778   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.246920   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.248668   22294 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0612 20:13:22.248682   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.248684   22294 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0612 20:13:22.248757   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.250280   22294 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0612 20:13:22.251808   22294 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0612 20:13:22.251826   22294 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0612 20:13:22.251847   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.250476   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.253569   22294 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0612 20:13:22.250500   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.252517   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.252557   22294 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 20:13:22.252910   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.254645   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.255267   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.255287   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.255319   22294 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0612 20:13:22.255331   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0612 20:13:22.255343   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.255345   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.255361   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.255368   22294 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 20:13:22.255379   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.255217   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.255963   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.256042   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.256084   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.256127   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.256194   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.256235   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.256636   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.258394   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.260383   22294 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0612 20:13:22.259234   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.259824   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.259835   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.260301   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.262157   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.262180   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.262158   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.262199   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.262274   22294 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0612 20:13:22.262285   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0612 20:13:22.262303   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.262419   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.262502   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.262558   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.262666   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.262921   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.263057   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:22.265195   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.265612   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.265634   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.265718   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.265890   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.266026   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.266182   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	W0612 20:13:22.266610   22294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57956->192.168.39.248:22: read: connection reset by peer
	I0612 20:13:22.266635   22294 retry.go:31] will retry after 266.514003ms: ssh: handshake failed: read tcp 192.168.39.1:57956->192.168.39.248:22: read: connection reset by peer
	W0612 20:13:22.267105   22294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57972->192.168.39.248:22: read: connection reset by peer
	I0612 20:13:22.267120   22294 retry.go:31] will retry after 197.996218ms: ssh: handshake failed: read tcp 192.168.39.1:57972->192.168.39.248:22: read: connection reset by peer
	I0612 20:13:22.273839   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45725
	I0612 20:13:22.274192   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:22.274690   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:22.274710   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:22.275048   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:22.275277   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:22.276907   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:22.278886   22294 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0612 20:13:22.280507   22294 out.go:177]   - Using image docker.io/busybox:stable
	I0612 20:13:22.282013   22294 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0612 20:13:22.282035   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0612 20:13:22.282056   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:22.284749   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.285141   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:22.285166   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:22.285299   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:22.285478   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:22.285614   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:22.285792   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	W0612 20:13:22.287866   22294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57980->192.168.39.248:22: read: connection reset by peer
	I0612 20:13:22.287887   22294 retry.go:31] will retry after 212.825352ms: ssh: handshake failed: read tcp 192.168.39.1:57980->192.168.39.248:22: read: connection reset by peer
	I0612 20:13:22.504444   22294 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0612 20:13:22.504467   22294 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0612 20:13:22.580185   22294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 20:13:22.580223   22294 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0612 20:13:22.599270   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0612 20:13:22.642311   22294 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0612 20:13:22.642340   22294 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0612 20:13:22.668236   22294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 20:13:22.668267   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0612 20:13:22.718448   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0612 20:13:22.735200   22294 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0612 20:13:22.735225   22294 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0612 20:13:22.834509   22294 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0612 20:13:22.834537   22294 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0612 20:13:22.866635   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 20:13:22.871072   22294 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0612 20:13:22.871098   22294 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0612 20:13:22.911158   22294 node_ready.go:35] waiting up to 6m0s for node "addons-899843" to be "Ready" ...
	I0612 20:13:22.914514   22294 node_ready.go:49] node "addons-899843" has status "Ready":"True"
	I0612 20:13:22.914537   22294 node_ready.go:38] duration metric: took 3.330668ms for node "addons-899843" to be "Ready" ...
	I0612 20:13:22.914546   22294 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 20:13:22.921162   22294 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vcczk" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:22.950633   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0612 20:13:22.951587   22294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 20:13:22.951612   22294 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 20:13:22.955366   22294 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0612 20:13:22.955388   22294 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0612 20:13:22.999792   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0612 20:13:23.002883   22294 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0612 20:13:23.002902   22294 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0612 20:13:23.009468   22294 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0612 20:13:23.009492   22294 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0612 20:13:23.099146   22294 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0612 20:13:23.099179   22294 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0612 20:13:23.143821   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0612 20:13:23.149958   22294 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0612 20:13:23.149981   22294 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0612 20:13:23.230052   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 20:13:23.293946   22294 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0612 20:13:23.293976   22294 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0612 20:13:23.297354   22294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 20:13:23.297379   22294 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 20:13:23.321358   22294 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0612 20:13:23.321379   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0612 20:13:23.328583   22294 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0612 20:13:23.328597   22294 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0612 20:13:23.334853   22294 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0612 20:13:23.334877   22294 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0612 20:13:23.354171   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0612 20:13:23.464782   22294 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0612 20:13:23.464805   22294 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0612 20:13:23.569896   22294 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0612 20:13:23.569921   22294 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0612 20:13:23.584845   22294 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0612 20:13:23.584873   22294 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0612 20:13:23.598618   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0612 20:13:23.603990   22294 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0612 20:13:23.604021   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0612 20:13:23.609170   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 20:13:23.750175   22294 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0612 20:13:23.750215   22294 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0612 20:13:23.755982   22294 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0612 20:13:23.756012   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0612 20:13:23.777292   22294 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0612 20:13:23.777319   22294 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0612 20:13:23.780935   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0612 20:13:23.923604   22294 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0612 20:13:23.923631   22294 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0612 20:13:24.107403   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0612 20:13:24.175358   22294 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0612 20:13:24.175378   22294 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0612 20:13:24.316376   22294 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0612 20:13:24.316398   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0612 20:13:24.493895   22294 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0612 20:13:24.493919   22294 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0612 20:13:24.627481   22294 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0612 20:13:24.627508   22294 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0612 20:13:24.861133   22294 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0612 20:13:24.861158   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0612 20:13:24.927586   22294 pod_ready.go:102] pod "coredns-7db6d8ff4d-vcczk" in "kube-system" namespace has status "Ready":"False"
	I0612 20:13:25.039325   22294 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0612 20:13:25.039356   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0612 20:13:25.200943   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0612 20:13:25.221045   22294 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.640784377s)
	I0612 20:13:25.221088   22294 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0612 20:13:25.679128   22294 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0612 20:13:25.679154   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0612 20:13:25.728061   22294 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-899843" context rescaled to 1 replicas
	I0612 20:13:26.026057   22294 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0612 20:13:26.026086   22294 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0612 20:13:26.484352   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0612 20:13:27.013000   22294 pod_ready.go:102] pod "coredns-7db6d8ff4d-vcczk" in "kube-system" namespace has status "Ready":"False"
	I0612 20:13:27.973811   22294 pod_ready.go:92] pod "coredns-7db6d8ff4d-vcczk" in "kube-system" namespace has status "Ready":"True"
	I0612 20:13:27.973848   22294 pod_ready.go:81] duration metric: took 5.052657039s for pod "coredns-7db6d8ff4d-vcczk" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:27.973862   22294 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-whsws" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.098852   22294 pod_ready.go:92] pod "coredns-7db6d8ff4d-whsws" in "kube-system" namespace has status "Ready":"True"
	I0612 20:13:28.098877   22294 pod_ready.go:81] duration metric: took 125.007716ms for pod "coredns-7db6d8ff4d-whsws" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.098888   22294 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.215226   22294 pod_ready.go:92] pod "etcd-addons-899843" in "kube-system" namespace has status "Ready":"True"
	I0612 20:13:28.215263   22294 pod_ready.go:81] duration metric: took 116.367988ms for pod "etcd-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.215277   22294 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.281296   22294 pod_ready.go:92] pod "kube-apiserver-addons-899843" in "kube-system" namespace has status "Ready":"True"
	I0612 20:13:28.281327   22294 pod_ready.go:81] duration metric: took 66.04148ms for pod "kube-apiserver-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.281340   22294 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.322865   22294 pod_ready.go:92] pod "kube-controller-manager-addons-899843" in "kube-system" namespace has status "Ready":"True"
	I0612 20:13:28.322899   22294 pod_ready.go:81] duration metric: took 41.550583ms for pod "kube-controller-manager-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.322913   22294 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rbbmx" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.361318   22294 pod_ready.go:92] pod "kube-proxy-rbbmx" in "kube-system" namespace has status "Ready":"True"
	I0612 20:13:28.361341   22294 pod_ready.go:81] duration metric: took 38.421415ms for pod "kube-proxy-rbbmx" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.361350   22294 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.743251   22294 pod_ready.go:92] pod "kube-scheduler-addons-899843" in "kube-system" namespace has status "Ready":"True"
	I0612 20:13:28.743287   22294 pod_ready.go:81] duration metric: took 381.916619ms for pod "kube-scheduler-addons-899843" in "kube-system" namespace to be "Ready" ...
	I0612 20:13:28.743298   22294 pod_ready.go:38] duration metric: took 5.828741017s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 20:13:28.743315   22294 api_server.go:52] waiting for apiserver process to appear ...
	I0612 20:13:28.743371   22294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:13:29.338832   22294 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0612 20:13:29.338872   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:29.342245   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:29.342737   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:29.342769   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:29.342979   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:29.343225   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:29.343395   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:29.343527   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:30.139855   22294 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0612 20:13:30.291694   22294 addons.go:234] Setting addon gcp-auth=true in "addons-899843"
	I0612 20:13:30.291755   22294 host.go:66] Checking if "addons-899843" exists ...
	I0612 20:13:30.292188   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:30.292231   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:30.308275   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I0612 20:13:30.308808   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:30.309324   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:30.309350   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:30.309659   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:30.310249   22294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:13:30.310301   22294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:13:30.327660   22294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0612 20:13:30.328099   22294 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:13:30.328601   22294 main.go:141] libmachine: Using API Version  1
	I0612 20:13:30.328616   22294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:13:30.328980   22294 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:13:30.329219   22294 main.go:141] libmachine: (addons-899843) Calling .GetState
	I0612 20:13:30.331231   22294 main.go:141] libmachine: (addons-899843) Calling .DriverName
	I0612 20:13:30.331456   22294 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0612 20:13:30.331490   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHHostname
	I0612 20:13:30.334646   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:30.335276   22294 main.go:141] libmachine: (addons-899843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:9b:d7", ip: ""} in network mk-addons-899843: {Iface:virbr1 ExpiryTime:2024-06-12 21:12:42 +0000 UTC Type:0 Mac:52:54:00:58:9b:d7 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:addons-899843 Clientid:01:52:54:00:58:9b:d7}
	I0612 20:13:30.335305   22294 main.go:141] libmachine: (addons-899843) DBG | domain addons-899843 has defined IP address 192.168.39.248 and MAC address 52:54:00:58:9b:d7 in network mk-addons-899843
	I0612 20:13:30.335526   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHPort
	I0612 20:13:30.335729   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHKeyPath
	I0612 20:13:30.335905   22294 main.go:141] libmachine: (addons-899843) Calling .GetSSHUsername
	I0612 20:13:30.336053   22294 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/addons-899843/id_rsa Username:docker}
	I0612 20:13:31.426367   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.827060925s)
	I0612 20:13:31.426419   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426422   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.707943214s)
	I0612 20:13:31.426461   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426472   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426478   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.559811913s)
	I0612 20:13:31.426496   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426505   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426431   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426566   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.475900289s)
	I0612 20:13:31.426594   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426605   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426630   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.42680937s)
	I0612 20:13:31.426672   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426673   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.282825584s)
	I0612 20:13:31.426682   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426705   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426717   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426783   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.196708296s)
	I0612 20:13:31.426801   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426808   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426891   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.072683305s)
	I0612 20:13:31.426928   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.426910   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.426938   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.426943   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426969   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.828321802s)
	I0612 20:13:31.427027   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.427031   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.427036   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.427041   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.427044   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.427050   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.427095   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.427102   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.427109   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.427116   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.427191   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.817984227s)
	I0612 20:13:31.427220   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.427235   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.426986   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.428605   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.428616   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.428626   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.428638   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.428789   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.428814   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.428821   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.428829   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.428836   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.428922   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.428984   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.429004   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.429010   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.427008   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.429244   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.429254   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.429261   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.429344   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.648365965s)
	I0612 20:13:31.429361   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.429367   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.430024   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.32258049s)
	W0612 20:13:31.430062   22294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0612 20:13:31.430093   22294 retry.go:31] will retry after 153.385095ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
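	The failure above is the usual CRD ordering race: csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass in the same apply batch that installs the snapshot.storage.k8s.io CRDs, and the API server rejects the custom resource until those CRDs are established, hence "ensure CRDs are installed first". The addon manager simply retries the apply after a short delay (the retry.go:31 line above), which is enough because the CRDs become established within a second or two. The sketch below shows that retry-on-apply pattern in Go for context only; it is not minikube's code, and applyWithRetry is a hypothetical name.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// applyWithRetry shells out to kubectl and retries on failure, mirroring the
	// pattern visible in this log: the first apply fails with "ensure CRDs are
	// installed first", a later attempt succeeds once the CRDs are established.
	func applyWithRetry(kubeconfig string, manifests []string, attempts int, backoff time.Duration) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}

		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("kubectl", args...)
			cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("attempt %d failed: %v\n%s", i+1, err, out)
			time.Sleep(backoff)
		}
		return lastErr
	}

	func main() {
		// Paths taken from the log above; purely illustrative.
		err := applyWithRetry(
			"/var/lib/minikube/kubeconfig",
			[]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
			3, 200*time.Millisecond,
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}

	On the next attempt the CRDs already exist, so the same manifest set applies cleanly; in this run that is the "kubectl apply --force" invocation a few lines further down.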
	I0612 20:13:31.430193   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.2292178s)
	I0612 20:13:31.430211   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.430220   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.430284   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.430306   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.430312   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.430322   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.430328   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.430369   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.430386   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.430392   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.430401   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.430444   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.430466   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.430472   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.430529   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.430547   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.430553   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.430560   22294 addons.go:475] Verifying addon ingress=true in "addons-899843"
	I0612 20:13:31.432619   22294 out.go:177] * Verifying ingress addon...
	I0612 20:13:31.430792   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.430813   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.430954   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.430978   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.430987   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.430999   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.431003   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.431017   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.431034   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.431037   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.431055   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.431061   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.431075   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.432090   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.432696   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.432698   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.432733   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.434204   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.435485   22294 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-899843 service yakd-dashboard -n yakd-dashboard
	
	I0612 20:13:31.432746   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.432753   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.432760   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.432127   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.432764   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.432770   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.434208   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.434217   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.434892   22294 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0612 20:13:31.436874   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.436893   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.436909   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.436926   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.436959   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.436895   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.436972   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.436975   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.436980   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.437398   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.437414   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.437422   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.437429   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.437439   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.437444   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.437445   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.437452   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.437430   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.437587   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.437643   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.437652   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.437660   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.437668   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.437669   22294 addons.go:475] Verifying addon registry=true in "addons-899843"
	I0612 20:13:31.439100   22294 out.go:177] * Verifying registry addon...
	I0612 20:13:31.437809   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.440877   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.440894   22294 addons.go:475] Verifying addon metrics-server=true in "addons-899843"
	I0612 20:13:31.441648   22294 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0612 20:13:31.461798   22294 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0612 20:13:31.461820   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:31.462149   22294 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0612 20:13:31.462171   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
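	The kapi.go:96 lines that follow are a readiness poll: the addon verifier repeatedly lists the pods behind each label selector and reports their current phase until every matching pod becomes Ready or the wait times out. A minimal client-go sketch of such a label-selector poll is shown below for orientation; waitForLabel and allReady are hypothetical names and this is not minikube's implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls until every pod matching the selector reports Ready,
	// roughly what the "waiting for pod ... current state: Pending" lines record.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
	}

	func allReady(pods []corev1.Pod) bool {
		for _, p := range pods {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false
			}
		}
		return true
	}

	func main() {
		// Kubeconfig path and selector taken from the log above; error handling
		// is kept minimal because this is only a sketch.
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitForLabel(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 5*time.Minute))
	}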
	I0612 20:13:31.480792   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.480815   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.481144   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.481190   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.481198   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	W0612 20:13:31.481307   22294 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
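	The warning above is a Kubernetes optimistic-concurrency conflict: between reading the local-path StorageClass and writing it back with the default-class annotation changed, something else updated the object, so the write with the stale resourceVersion is rejected ("the object has been modified; please apply your changes to the latest version and try again"). The standard remedy is to re-read and re-apply the change in a conflict-retry loop; the sketch below uses client-go's retry.RetryOnConflict for that. It is illustrative only (markNonDefault is a hypothetical helper), not the code that produced this log.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation on a StorageClass,
	// retrying on resourceVersion conflicts like the one reported above.
	func markNonDefault(cs *kubernetes.Clientset, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err
		})
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := markNonDefault(cs, "local-path"); err != nil {
			fmt.Println("failed:", err)
		}
	}

	Because the whole get-mutate-update closure is re-run on each conflict, the update always goes out against the latest resourceVersion, which is exactly what the error message asks for.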
	I0612 20:13:31.484413   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:31.484432   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:31.484708   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:31.484720   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:31.484728   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:31.584187   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0612 20:13:31.942626   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:31.950818   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:32.442737   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:32.445433   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:32.941388   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:32.950965   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:33.479709   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:33.481193   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:33.623003   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.1385913s)
	I0612 20:13:33.623074   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:33.623080   22294 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.87968245s)
	I0612 20:13:33.623112   22294 api_server.go:72] duration metric: took 11.55442479s to wait for apiserver process to appear ...
	I0612 20:13:33.623123   22294 api_server.go:88] waiting for apiserver healthz status ...
	I0612 20:13:33.623147   22294 api_server.go:253] Checking apiserver healthz at https://192.168.39.248:8443/healthz ...
	I0612 20:13:33.623090   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:33.623112   22294 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.291635112s)
	I0612 20:13:33.624906   22294 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0612 20:13:33.623516   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:33.623592   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:33.626371   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:33.626393   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:33.626403   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:33.627779   22294 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0612 20:13:33.626634   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:33.626669   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:33.629060   22294 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0612 20:13:33.629068   22294 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0612 20:13:33.629100   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:33.629132   22294 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-899843"
	I0612 20:13:33.630648   22294 out.go:177] * Verifying csi-hostpath-driver addon...
	I0612 20:13:33.632948   22294 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0612 20:13:33.660297   22294 api_server.go:279] https://192.168.39.248:8443/healthz returned 200:
	ok
	I0612 20:13:33.671589   22294 api_server.go:141] control plane version: v1.30.1
	I0612 20:13:33.671613   22294 api_server.go:131] duration metric: took 48.483679ms to wait for apiserver health ...
	I0612 20:13:33.671621   22294 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 20:13:33.673170   22294 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0612 20:13:33.673194   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:33.719700   22294 system_pods.go:59] 19 kube-system pods found
	I0612 20:13:33.719745   22294 system_pods.go:61] "coredns-7db6d8ff4d-vcczk" [df3fef56-31ac-482e-a39b-29b00592b53b] Running
	I0612 20:13:33.719753   22294 system_pods.go:61] "coredns-7db6d8ff4d-whsws" [ad628dac-001d-4531-89fd-33629dcc54cb] Running
	I0612 20:13:33.719764   22294 system_pods.go:61] "csi-hostpath-attacher-0" [ba878465-f2b1-4c7e-a56e-791040338b12] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0612 20:13:33.719771   22294 system_pods.go:61] "csi-hostpath-resizer-0" [c1fba905-dff2-4f6b-8226-27d1530fe067] Pending
	I0612 20:13:33.719782   22294 system_pods.go:61] "csi-hostpathplugin-h9np6" [066343ae-5c77-4a5c-b973-ce1972c4816d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0612 20:13:33.719787   22294 system_pods.go:61] "etcd-addons-899843" [6762c9cc-df6c-48de-9bee-553b979bc90e] Running
	I0612 20:13:33.719793   22294 system_pods.go:61] "kube-apiserver-addons-899843" [1b709cc7-14d9-472a-9fd2-14f675696c51] Running
	I0612 20:13:33.719801   22294 system_pods.go:61] "kube-controller-manager-addons-899843" [77707797-5a1b-457f-9628-708c30b7209f] Running
	I0612 20:13:33.719809   22294 system_pods.go:61] "kube-ingress-dns-minikube" [fe4b4575-3547-4019-bc49-d7599aaaedc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0612 20:13:33.719819   22294 system_pods.go:61] "kube-proxy-rbbmx" [07785176-2ce1-4304-992e-8962b08939db] Running
	I0612 20:13:33.719825   22294 system_pods.go:61] "kube-scheduler-addons-899843" [2204b584-b2c5-4c49-924c-17b3552682a1] Running
	I0612 20:13:33.719833   22294 system_pods.go:61] "metrics-server-c59844bb4-g6s5d" [4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 20:13:33.719846   22294 system_pods.go:61] "nvidia-device-plugin-daemonset-7t2hk" [318904a0-3329-4548-9694-082dce3d63ff] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0612 20:13:33.719860   22294 system_pods.go:61] "registry-d4wfp" [4dedad66-548d-4156-a741-4077e86eb02b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0612 20:13:33.719874   22294 system_pods.go:61] "registry-proxy-l4fcl" [947cca02-a2df-4d5e-b84a-0cb7bb05d876] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0612 20:13:33.719886   22294 system_pods.go:61] "snapshot-controller-745499f584-2ctxc" [7350c859-7403-48dd-8f17-716af45a66e0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0612 20:13:33.719898   22294 system_pods.go:61] "snapshot-controller-745499f584-flslf" [143d8fc1-b352-4a2d-a199-4c29ea465493] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0612 20:13:33.719909   22294 system_pods.go:61] "storage-provisioner" [5aa128d9-0268-4ed7-9ba8-a3405add5dd5] Running
	I0612 20:13:33.719920   22294 system_pods.go:61] "tiller-deploy-6677d64bcd-wrb4j" [d5a32aea-e711-4681-8246-f238b7566914] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0612 20:13:33.719932   22294 system_pods.go:74] duration metric: took 48.304478ms to wait for pod list to return data ...
	I0612 20:13:33.719946   22294 default_sa.go:34] waiting for default service account to be created ...
	I0612 20:13:33.733254   22294 default_sa.go:45] found service account: "default"
	I0612 20:13:33.733279   22294 default_sa.go:55] duration metric: took 13.322298ms for default service account to be created ...
	I0612 20:13:33.733290   22294 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 20:13:33.763168   22294 system_pods.go:86] 19 kube-system pods found
	I0612 20:13:33.763216   22294 system_pods.go:89] "coredns-7db6d8ff4d-vcczk" [df3fef56-31ac-482e-a39b-29b00592b53b] Running
	I0612 20:13:33.763224   22294 system_pods.go:89] "coredns-7db6d8ff4d-whsws" [ad628dac-001d-4531-89fd-33629dcc54cb] Running
	I0612 20:13:33.763234   22294 system_pods.go:89] "csi-hostpath-attacher-0" [ba878465-f2b1-4c7e-a56e-791040338b12] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0612 20:13:33.763244   22294 system_pods.go:89] "csi-hostpath-resizer-0" [c1fba905-dff2-4f6b-8226-27d1530fe067] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0612 20:13:33.763258   22294 system_pods.go:89] "csi-hostpathplugin-h9np6" [066343ae-5c77-4a5c-b973-ce1972c4816d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0612 20:13:33.763269   22294 system_pods.go:89] "etcd-addons-899843" [6762c9cc-df6c-48de-9bee-553b979bc90e] Running
	I0612 20:13:33.763278   22294 system_pods.go:89] "kube-apiserver-addons-899843" [1b709cc7-14d9-472a-9fd2-14f675696c51] Running
	I0612 20:13:33.763289   22294 system_pods.go:89] "kube-controller-manager-addons-899843" [77707797-5a1b-457f-9628-708c30b7209f] Running
	I0612 20:13:33.763299   22294 system_pods.go:89] "kube-ingress-dns-minikube" [fe4b4575-3547-4019-bc49-d7599aaaedc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0612 20:13:33.763310   22294 system_pods.go:89] "kube-proxy-rbbmx" [07785176-2ce1-4304-992e-8962b08939db] Running
	I0612 20:13:33.763326   22294 system_pods.go:89] "kube-scheduler-addons-899843" [2204b584-b2c5-4c49-924c-17b3552682a1] Running
	I0612 20:13:33.763341   22294 system_pods.go:89] "metrics-server-c59844bb4-g6s5d" [4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 20:13:33.763354   22294 system_pods.go:89] "nvidia-device-plugin-daemonset-7t2hk" [318904a0-3329-4548-9694-082dce3d63ff] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0612 20:13:33.763371   22294 system_pods.go:89] "registry-d4wfp" [4dedad66-548d-4156-a741-4077e86eb02b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0612 20:13:33.763384   22294 system_pods.go:89] "registry-proxy-l4fcl" [947cca02-a2df-4d5e-b84a-0cb7bb05d876] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0612 20:13:33.763398   22294 system_pods.go:89] "snapshot-controller-745499f584-2ctxc" [7350c859-7403-48dd-8f17-716af45a66e0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0612 20:13:33.763413   22294 system_pods.go:89] "snapshot-controller-745499f584-flslf" [143d8fc1-b352-4a2d-a199-4c29ea465493] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0612 20:13:33.763424   22294 system_pods.go:89] "storage-provisioner" [5aa128d9-0268-4ed7-9ba8-a3405add5dd5] Running
	I0612 20:13:33.763434   22294 system_pods.go:89] "tiller-deploy-6677d64bcd-wrb4j" [d5a32aea-e711-4681-8246-f238b7566914] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0612 20:13:33.763449   22294 system_pods.go:126] duration metric: took 30.149887ms to wait for k8s-apps to be running ...
	I0612 20:13:33.763461   22294 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 20:13:33.763507   22294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:13:33.812467   22294 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0612 20:13:33.812497   22294 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0612 20:13:33.841382   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.257135351s)
	I0612 20:13:33.841444   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:33.841460   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:33.841811   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:33.841870   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:33.841885   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:33.841901   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:33.841914   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:33.842145   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:33.842179   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:33.842191   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:33.887868   22294 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0612 20:13:33.887899   22294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0612 20:13:33.915703   22294 system_svc.go:56] duration metric: took 152.2322ms WaitForService to wait for kubelet
	I0612 20:13:33.915733   22294 kubeadm.go:576] duration metric: took 11.847043277s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 20:13:33.915757   22294 node_conditions.go:102] verifying NodePressure condition ...
	I0612 20:13:33.919911   22294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 20:13:33.919937   22294 node_conditions.go:123] node cpu capacity is 2
	I0612 20:13:33.919952   22294 node_conditions.go:105] duration metric: took 4.189506ms to run NodePressure ...
	I0612 20:13:33.919967   22294 start.go:240] waiting for startup goroutines ...
	I0612 20:13:33.943407   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:33.947951   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:33.982710   22294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0612 20:13:34.139733   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:34.443454   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:34.450793   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:34.641665   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:34.945630   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:34.948863   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:35.139664   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:35.454824   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:35.456398   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:35.502512   22294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.519764711s)
	I0612 20:13:35.502578   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:35.502600   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:35.502924   22294 main.go:141] libmachine: (addons-899843) DBG | Closing plugin on server side
	I0612 20:13:35.502953   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:35.502965   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:35.502973   22294 main.go:141] libmachine: Making call to close driver server
	I0612 20:13:35.502991   22294 main.go:141] libmachine: (addons-899843) Calling .Close
	I0612 20:13:35.503287   22294 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:13:35.503341   22294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:13:35.505898   22294 addons.go:475] Verifying addon gcp-auth=true in "addons-899843"
	I0612 20:13:35.507700   22294 out.go:177] * Verifying gcp-auth addon...
	I0612 20:13:35.509931   22294 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0612 20:13:35.525388   22294 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0612 20:13:35.525422   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:35.641530   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:35.942177   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:35.946076   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:36.013803   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:36.138965   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:36.440838   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:36.446023   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:36.513866   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:36.638619   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:36.941753   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:36.946365   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:37.014220   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:37.138418   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:37.442873   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:37.447477   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:37.513223   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:37.637951   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:37.942361   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:37.946584   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:38.013244   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:38.138381   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:38.441866   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:38.445739   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:38.513621   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:38.638314   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:38.942584   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:38.946229   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:39.014090   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:39.138744   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:39.441243   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:39.446597   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:39.513632   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:39.640106   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:39.942861   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:39.949471   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:40.014821   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:40.139087   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:40.441225   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:40.446503   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:40.513918   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:40.638240   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:40.942437   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:40.945527   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:41.013013   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:41.142787   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:41.442217   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:41.447663   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:41.513968   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:41.639548   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:41.940600   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:41.945947   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:42.013608   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:42.139280   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:42.440944   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:42.445956   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:42.514292   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:42.638014   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:42.941229   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:42.946634   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:43.014081   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:43.139321   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:43.441941   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:43.446329   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:43.514349   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:43.639084   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:43.941170   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:43.946190   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:44.014724   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:44.139088   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:44.442577   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:44.447193   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:44.514675   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:44.641120   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:44.942873   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:44.946663   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:45.013963   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:45.138550   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:45.440758   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:45.445844   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:45.514534   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:45.638634   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:45.941212   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:45.946328   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:46.012685   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:46.139344   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:46.441587   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:46.445240   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:46.514056   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:46.638638   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:46.940912   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:46.945904   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:47.013910   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:47.138654   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:47.442908   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:47.447432   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:47.513849   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:47.640472   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:47.941419   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:47.946209   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:48.018789   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:48.138764   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:48.440771   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:48.446292   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:48.514151   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:48.639066   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:48.941345   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:48.946377   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:49.014053   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:49.140657   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:49.441933   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:49.445867   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:49.514073   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:49.639364   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:49.941848   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:49.947681   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:50.013603   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:50.242808   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:50.565817   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:50.565950   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:50.567151   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:50.649836   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:50.940922   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:50.946884   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:51.014087   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:51.138769   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:51.442652   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:51.445714   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:51.513919   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:51.640943   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:51.941909   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:51.945878   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:52.014488   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:52.138762   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:52.441823   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:52.446036   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:52.514518   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:52.638524   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:52.941873   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:52.946623   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:53.013095   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:53.142776   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:53.442395   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:53.446151   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:53.514756   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:53.638646   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:53.941531   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:53.947504   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:54.014024   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:54.139212   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:54.442459   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:54.446325   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:54.514055   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:54.639028   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:54.941841   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:54.946233   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:55.014189   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:55.138594   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:55.441414   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:55.445444   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:55.513740   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:55.638506   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:55.940824   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:55.946189   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:56.013977   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:56.139086   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:56.440896   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:56.445886   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:56.513793   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:56.639096   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:56.941136   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:56.949066   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:57.015635   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:57.139378   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:57.442176   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:57.446467   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:57.513827   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:57.638854   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:57.941679   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:57.945342   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:58.013397   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:58.138552   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:58.441522   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:58.445725   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:58.513987   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:58.638838   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:58.941278   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:58.946486   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:59.013426   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:59.138295   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:59.441550   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:59.445331   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:13:59.513318   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:13:59.638348   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:13:59.941585   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:13:59.946220   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:00.014210   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:00.138903   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:00.441113   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:00.446761   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:00.513876   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:00.640480   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:00.940504   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:00.945664   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:01.014145   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:01.139636   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:01.441018   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:01.446232   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:01.514364   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:01.639354   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:01.941847   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:01.945697   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:02.013427   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:02.138657   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:02.447121   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:02.452314   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:02.514095   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:02.639822   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:02.942774   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:02.951263   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:03.014003   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:03.139683   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:03.441826   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:03.446139   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:03.514693   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:03.638724   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:03.943020   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:03.946822   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:04.013828   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:04.139433   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:04.441086   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:04.447323   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:04.515602   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:04.641817   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:04.943687   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:04.954339   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:05.013191   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:05.137903   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:05.441578   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:05.445629   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:05.513531   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:05.638493   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:05.958599   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:05.958750   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:06.014059   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:06.139515   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:06.442392   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:06.449111   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:06.515214   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:06.639838   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:06.943109   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:06.946613   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:07.017502   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:07.138996   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:07.440574   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:07.445317   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:07.513449   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:07.639145   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:07.953378   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:07.953536   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:08.013811   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:08.138960   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:08.713266   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:08.713724   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:08.714290   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:08.714445   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:08.943516   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:08.948070   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:09.013504   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:09.139593   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:09.442688   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:09.446923   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:09.513610   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:09.638843   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:09.940651   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:09.945778   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:10.013911   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:10.139012   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:10.440803   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:10.446476   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:10.761684   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:10.769449   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:10.941936   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:10.945829   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:11.013821   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:11.138738   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:11.441256   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:11.447008   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:11.513871   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:11.638665   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:11.952701   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:11.960975   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:12.018214   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:12.139356   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:12.441185   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:12.446335   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:12.513192   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:12.638127   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:12.941682   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:12.946268   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:13.013729   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:13.139454   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:13.440743   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:13.445985   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:13.517764   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:13.638912   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:13.942290   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:13.947821   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:14.014012   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:14.139485   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:14.441683   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:14.445681   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:14.513793   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:14.638925   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:14.941384   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:14.946649   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:15.147627   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:15.149431   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:15.440761   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:15.446473   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:15.516478   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:15.638528   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:15.941121   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:15.946975   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:16.013486   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:16.138455   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:16.441368   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:16.445303   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:16.514138   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:16.649113   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:16.942014   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:16.946324   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:17.013310   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:17.138797   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:17.548125   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:17.550771   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:17.553142   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:17.638890   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:17.941654   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:17.945723   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:18.017588   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:18.138957   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:18.441505   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:18.445779   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:18.514716   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:18.639155   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:18.941899   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:18.946031   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:19.013878   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:19.138875   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:19.441782   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:19.445566   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:19.514353   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:19.647577   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:19.941106   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:19.946421   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:20.017203   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:20.139306   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:20.441441   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:20.446338   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:20.513550   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:20.639121   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:20.942070   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:20.945767   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:21.013485   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:21.141237   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:21.441752   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:21.448683   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:21.513566   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:21.639813   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:21.941647   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:21.945745   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:22.013557   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:22.139139   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:22.442032   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:22.445494   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:22.513783   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:22.639418   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:22.941723   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:22.947366   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:23.014401   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:23.139642   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:23.442233   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:23.454145   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:23.542431   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:23.638907   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:23.941751   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:23.945697   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:24.013794   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:24.139275   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:24.441296   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:24.446058   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:24.514000   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:24.640403   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:24.942045   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:24.946277   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:25.014272   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:25.139640   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:25.442333   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:25.446189   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:25.513604   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:25.638616   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:25.941026   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:25.946637   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:26.014884   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:26.138692   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:26.441023   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:26.446078   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:26.514289   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:26.638085   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:26.941828   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:26.946621   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:27.014721   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:27.138687   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:27.441072   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:27.446479   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:27.514543   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:27.638569   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:27.941814   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:27.945704   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 20:14:28.014133   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:28.150804   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:28.440517   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:28.445587   22294 kapi.go:107] duration metric: took 57.003936809s to wait for kubernetes.io/minikube-addons=registry ...
	I0612 20:14:28.512953   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:28.638516   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:28.941173   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:29.013886   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:29.138672   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:29.441191   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:29.523412   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:29.639897   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:29.940283   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:30.014079   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:30.139757   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:30.440774   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:30.517943   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:30.638459   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:30.941668   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:31.014184   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:31.138395   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:31.441643   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:31.513821   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:31.641887   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:31.941347   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:32.014268   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:32.139415   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:32.441787   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:32.513346   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:32.640964   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:32.942162   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:33.014047   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:33.139659   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:33.442093   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:33.514084   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:33.638871   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:33.941251   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:34.013569   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:34.138392   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:34.441925   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:34.527586   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:34.641739   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:34.940985   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:35.014913   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:35.139837   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:35.441387   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:35.515881   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:35.639895   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:35.942628   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:36.012597   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:36.138674   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:36.444605   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:36.514418   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:36.638222   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:36.941765   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:37.013241   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:37.138652   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:37.441434   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:37.513823   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:37.638579   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:37.941303   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:38.018216   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:38.138765   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:38.450258   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:38.515535   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:38.640664   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:38.944410   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:39.014738   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:39.139967   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:39.441411   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:39.514352   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:39.648884   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:39.942031   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:40.013420   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:40.138683   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:40.441351   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:40.513859   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:40.639624   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:40.941763   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:41.013022   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:41.139219   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:41.442004   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:41.512698   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:41.638306   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:41.941915   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:42.014206   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:42.140001   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:42.441642   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:42.513394   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:42.638585   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:42.945585   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:43.431947   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:43.438693   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:43.444440   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:43.513758   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:43.639210   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:43.943934   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:44.013554   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:44.138319   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:44.441609   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:44.513165   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:44.640206   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:44.941191   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:45.014426   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:45.139340   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:45.443215   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:45.513937   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:45.639045   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:46.128891   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:46.129113   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:46.142407   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:46.443217   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:46.513512   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:46.660955   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:46.941300   22294 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 20:14:47.015201   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:47.147459   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:47.442562   22294 kapi.go:107] duration metric: took 1m16.007662934s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0612 20:14:47.513986   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:47.647004   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:48.013416   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:48.143071   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:48.513973   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:48.639573   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:49.013354   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:49.138390   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:49.514067   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:49.639269   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:50.014268   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:50.139479   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:50.513027   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:50.639459   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:51.014047   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:51.140498   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:51.513279   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:51.640851   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:52.014551   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 20:14:52.145705   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:52.514040   22294 kapi.go:107] duration metric: took 1m17.004104031s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0612 20:14:52.515934   22294 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-899843 cluster.
	I0612 20:14:52.517431   22294 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0612 20:14:52.518834   22294 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0612 20:14:52.642250   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:53.138957   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:53.640128   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:54.140417   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:54.638464   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:55.141403   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:55.638957   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:56.140084   22294 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 20:14:56.639348   22294 kapi.go:107] duration metric: took 1m23.006400375s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0612 20:14:56.641302   22294 out.go:177] * Enabled addons: helm-tiller, storage-provisioner, cloud-spanner, yakd, ingress-dns, inspektor-gadget, nvidia-device-plugin, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0612 20:14:56.642492   22294 addons.go:510] duration metric: took 1m34.573770903s for enable addons: enabled=[helm-tiller storage-provisioner cloud-spanner yakd ingress-dns inspektor-gadget nvidia-device-plugin metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0612 20:14:56.642526   22294 start.go:245] waiting for cluster config update ...
	I0612 20:14:56.642541   22294 start.go:254] writing updated cluster config ...
	I0612 20:14:56.642775   22294 ssh_runner.go:195] Run: rm -f paused
	I0612 20:14:56.693145   22294 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 20:14:56.694687   22294 out.go:177] * Done! kubectl is now configured to use "addons-899843" cluster and "default" namespace by default
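	[editor's note] The gcp-auth messages above say the addon mounts GCP credentials into every new pod and that a pod can opt out by carrying a label with the `gcp-auth-skip-secret` key in its configuration. A minimal sketch of such a pod manifest is shown below; only the label key comes from the log, while the label value "true", the pod name, and the image tag are assumptions for illustration.

	    # hypothetical pod spec sketch; only the gcp-auth-skip-secret label key is taken from the log above
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: example-no-gcp-creds        # assumed name, illustration only
	      labels:
	        gcp-auth-skip-secret: "true"    # key from the gcp-auth message; value assumed
	    spec:
	      containers:
	      - name: app
	        image: gcr.io/google-samples/hello-app:1.0   # assumed tag of the image referenced elsewhere in this log

	Per the same log output, existing pods are not retroactively mutated: they would need to be recreated, or the addon re-enabled with --refresh, for credentials to be mounted.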
	
	
	==> CRI-O <==
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.162274768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718223638162245753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584737,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b75798a-f3cc-4964-9214-21c9d8b9701f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.163474485Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2493e1c6-2439-4cf9-9692-caa70851e364 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.163552361Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2493e1c6-2439-4cf9-9692-caa70851e364 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.163878871Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3f50da1dc82c3bdc5c4aee9cbe33faf413edf00ec08a86f07293581216d844,PodSandboxId:897b715d5540fce6bfb92cdd4e8e348fd89fe3b9a28ca69d862bb222805813cc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718223473051090169,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-kbtl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3c060ce-d46f-4a37-b318-985519591838,},Annotations:map[string]string{io.kubernetes.container.hash: 4c8d702,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bd23008db0bf6f352b7240f729961b64b1e163658cf859b110036cea0b36343,PodSandboxId:a4ce8e3e4607485156cc665da1652d4c57412b86ed986db2939f0b0773956e1e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718223331979871665,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 63c525be-66b7-432d-b1ae-2f835c9880fb,},Annotations:map[string]string{io.kuberne
tes.container.hash: 1acc993f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a9f043f430d9bc9b333afe30bad2c4d0fadbd3362a0a47995d93d80a596fdf,PodSandboxId:33fa45c3c5b80da849fd42f7b08ce8abedcb3c4a8c98495f8b4ec4de72270644,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718223303780808234,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-2hfkx,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: c88103f0-de17-4f17-a1dd-fa97f936c891,},Annotations:map[string]string{io.kubernetes.container.hash: 81bd51d0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d4b4e6a74844aa3fb50a9b67334de1ccc7db3684015519cc4309f6862b0350,PodSandboxId:873613a097a7909e0e77bac97e43f11f54296876dad008e182b2f00acaa5f6e6,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718223291588963247,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-68z9r,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: fb378bcc-ffc3-427d-8d9c-3d4e10666a6f,},Annotations:map[string]string{io.kubernetes.container.hash: e1cd2245,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d4dd138630826914efef88030e35569cd5d20f0b2197c3bcdded7e1beaa4eb,PodSandboxId:383811ae9f7055532f19ae5088244cd03a0bb0990fbb1532a48364ea60d890fe,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171822
3257660480529,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-mwtps,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a01c7a18-474f-45e3-906d-4e7b54800ba0,},Annotations:map[string]string{io.kubernetes.container.hash: fe6613a5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e20f759a6abff3ddce914eaf3504b894db1e7b7f70afe41e69d452e5fc1dfe3,PodSandboxId:c49212b8d6af6f3d2a1b8fb049683c74fe9ee6ff60e6af3a96335877190cb1c9,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718223246450294727,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-g6s5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27,},Annotations:map[string]string{io.kubernetes.container.hash: 5669584c,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9757ad6bc984243b22d2f31c4395538db3da62772209d217bf69cae679a63a,PodSandboxId:ddf8ae86868ac83ae5e3874de4f61780a1c39e617b553fcb2d35426d0f88a699,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718223209119052384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa128d9-0268-4ed7-9ba8-a3405add5dd5,},Annotations:map[string]string{io.kubernetes.container.hash: 34461c91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb2eb9b48d57af25f2941f433f8710963ad414fa4886b1ecb969e2b098189f9,PodSandboxId:5702f8892f7515b81c0766a74d27a0032b536ec4957822a00fa534a2f1a06013,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c007
97ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718223205859284625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-whsws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad628dac-001d-4531-89fd-33629dcc54cb,},Annotations:map[string]string{io.kubernetes.container.hash: 5d7fd139,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af9c28efa5649762365aaf662619e5ef12712149626320de929ff8f3d0913b91,PodSand
boxId:3cde88d922a86baa00b3b490e2f52a80f9ace76f2d5aa8dc497a477acbbf7435,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718223203479106836,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rbbmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07785176-2ce1-4304-992e-8962b08939db,},Annotations:map[string]string{io.kubernetes.container.hash: e1528d11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d5a6b5dc6138a6fe7531c084808d8d1872a0a5bad983b681b1dea0b1283c97,PodSandboxId:80b1160c82ccdf458c755a43f93
905583a5e90d04d49a4b7fdc59afe4508e485,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718223183845513001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87bd34de49164d7e23d3bd85d31e57a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d11557b4b02db631833f6cf99c4c112b3830f7f51a7c6df64e2b87f28c3dbb36,PodSandboxId:a39da441a9fa91e95f11c5f46b31b8228f32e202998f
1d4340a08e23e02ead01,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718223183815641904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 478ea678f98fdcf850e28e6b8d10601f,},Annotations:map[string]string{io.kubernetes.container.hash: f5425390,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df6fd69f56389dcb1fb1abcd816b7212dccc260e9e123a6a0582bb35082f34d,PodSandboxId:76cfdcd865926bb5ab09cff69860418be695006b5268baad8ab4a00d44c78
b5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718223183730729809,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecd228d1130bcad7d53d31f82588ba53,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a312cd5dbab5c630a6d9070588273ef333b2e11e4341e8003d515698a4f42c8d,PodSandboxId:4e67a4ce941c1583ba92539c39a261b550a6b8860c438989a2d
314acc04c1250,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718223183717163642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27583d060d65b458ede39de8e114234,},Annotations:map[string]string{io.kubernetes.container.hash: 2184cd2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2493e1c6-2439-4cf9-9692-caa70851e364 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.205425472Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67d4d991-e23c-47f8-98ef-05ed0fde249b name=/runtime.v1.RuntimeService/Version
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.205516927Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67d4d991-e23c-47f8-98ef-05ed0fde249b name=/runtime.v1.RuntimeService/Version
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.206470791Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=299b603c-15b9-4db0-9c5c-256eebcb4c66 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.208403797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718223638208245507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584737,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=299b603c-15b9-4db0-9c5c-256eebcb4c66 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.209007274Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47ccf267-5a16-464f-bf83-00362b03fe3b name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.209067593Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47ccf267-5a16-464f-bf83-00362b03fe3b name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.209344779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3f50da1dc82c3bdc5c4aee9cbe33faf413edf00ec08a86f07293581216d844,PodSandboxId:897b715d5540fce6bfb92cdd4e8e348fd89fe3b9a28ca69d862bb222805813cc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718223473051090169,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-kbtl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3c060ce-d46f-4a37-b318-985519591838,},Annotations:map[string]string{io.kubernetes.container.hash: 4c8d702,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bd23008db0bf6f352b7240f729961b64b1e163658cf859b110036cea0b36343,PodSandboxId:a4ce8e3e4607485156cc665da1652d4c57412b86ed986db2939f0b0773956e1e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718223331979871665,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 63c525be-66b7-432d-b1ae-2f835c9880fb,},Annotations:map[string]string{io.kuberne
tes.container.hash: 1acc993f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a9f043f430d9bc9b333afe30bad2c4d0fadbd3362a0a47995d93d80a596fdf,PodSandboxId:33fa45c3c5b80da849fd42f7b08ce8abedcb3c4a8c98495f8b4ec4de72270644,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718223303780808234,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-2hfkx,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: c88103f0-de17-4f17-a1dd-fa97f936c891,},Annotations:map[string]string{io.kubernetes.container.hash: 81bd51d0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d4b4e6a74844aa3fb50a9b67334de1ccc7db3684015519cc4309f6862b0350,PodSandboxId:873613a097a7909e0e77bac97e43f11f54296876dad008e182b2f00acaa5f6e6,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718223291588963247,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-68z9r,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: fb378bcc-ffc3-427d-8d9c-3d4e10666a6f,},Annotations:map[string]string{io.kubernetes.container.hash: e1cd2245,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d4dd138630826914efef88030e35569cd5d20f0b2197c3bcdded7e1beaa4eb,PodSandboxId:383811ae9f7055532f19ae5088244cd03a0bb0990fbb1532a48364ea60d890fe,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171822
3257660480529,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-mwtps,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a01c7a18-474f-45e3-906d-4e7b54800ba0,},Annotations:map[string]string{io.kubernetes.container.hash: fe6613a5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e20f759a6abff3ddce914eaf3504b894db1e7b7f70afe41e69d452e5fc1dfe3,PodSandboxId:c49212b8d6af6f3d2a1b8fb049683c74fe9ee6ff60e6af3a96335877190cb1c9,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718223246450294727,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-g6s5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27,},Annotations:map[string]string{io.kubernetes.container.hash: 5669584c,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9757ad6bc984243b22d2f31c4395538db3da62772209d217bf69cae679a63a,PodSandboxId:ddf8ae86868ac83ae5e3874de4f61780a1c39e617b553fcb2d35426d0f88a699,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718223209119052384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa128d9-0268-4ed7-9ba8-a3405add5dd5,},Annotations:map[string]string{io.kubernetes.container.hash: 34461c91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb2eb9b48d57af25f2941f433f8710963ad414fa4886b1ecb969e2b098189f9,PodSandboxId:5702f8892f7515b81c0766a74d27a0032b536ec4957822a00fa534a2f1a06013,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c007
97ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718223205859284625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-whsws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad628dac-001d-4531-89fd-33629dcc54cb,},Annotations:map[string]string{io.kubernetes.container.hash: 5d7fd139,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af9c28efa5649762365aaf662619e5ef12712149626320de929ff8f3d0913b91,PodSand
boxId:3cde88d922a86baa00b3b490e2f52a80f9ace76f2d5aa8dc497a477acbbf7435,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718223203479106836,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rbbmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07785176-2ce1-4304-992e-8962b08939db,},Annotations:map[string]string{io.kubernetes.container.hash: e1528d11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d5a6b5dc6138a6fe7531c084808d8d1872a0a5bad983b681b1dea0b1283c97,PodSandboxId:80b1160c82ccdf458c755a43f93
905583a5e90d04d49a4b7fdc59afe4508e485,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718223183845513001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87bd34de49164d7e23d3bd85d31e57a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d11557b4b02db631833f6cf99c4c112b3830f7f51a7c6df64e2b87f28c3dbb36,PodSandboxId:a39da441a9fa91e95f11c5f46b31b8228f32e202998f
1d4340a08e23e02ead01,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718223183815641904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 478ea678f98fdcf850e28e6b8d10601f,},Annotations:map[string]string{io.kubernetes.container.hash: f5425390,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df6fd69f56389dcb1fb1abcd816b7212dccc260e9e123a6a0582bb35082f34d,PodSandboxId:76cfdcd865926bb5ab09cff69860418be695006b5268baad8ab4a00d44c78
b5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718223183730729809,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecd228d1130bcad7d53d31f82588ba53,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a312cd5dbab5c630a6d9070588273ef333b2e11e4341e8003d515698a4f42c8d,PodSandboxId:4e67a4ce941c1583ba92539c39a261b550a6b8860c438989a2d
314acc04c1250,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718223183717163642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27583d060d65b458ede39de8e114234,},Annotations:map[string]string{io.kubernetes.container.hash: 2184cd2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47ccf267-5a16-464f-bf83-00362b03fe3b name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.243854048Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7bae59da-ba51-4865-94e1-a6fc17bc9a7e name=/runtime.v1.RuntimeService/Version
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.243944597Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7bae59da-ba51-4865-94e1-a6fc17bc9a7e name=/runtime.v1.RuntimeService/Version
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.245276911Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5eb8e0d6-ea5a-49f4-ba29-35ba7f87693e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.246824964Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718223638246800624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584737,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5eb8e0d6-ea5a-49f4-ba29-35ba7f87693e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.247756955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11b3b631-9179-41a8-8bc6-01315bc2cf63 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.247810722Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11b3b631-9179-41a8-8bc6-01315bc2cf63 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.248085029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3f50da1dc82c3bdc5c4aee9cbe33faf413edf00ec08a86f07293581216d844,PodSandboxId:897b715d5540fce6bfb92cdd4e8e348fd89fe3b9a28ca69d862bb222805813cc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718223473051090169,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-kbtl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3c060ce-d46f-4a37-b318-985519591838,},Annotations:map[string]string{io.kubernetes.container.hash: 4c8d702,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bd23008db0bf6f352b7240f729961b64b1e163658cf859b110036cea0b36343,PodSandboxId:a4ce8e3e4607485156cc665da1652d4c57412b86ed986db2939f0b0773956e1e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718223331979871665,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 63c525be-66b7-432d-b1ae-2f835c9880fb,},Annotations:map[string]string{io.kuberne
tes.container.hash: 1acc993f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a9f043f430d9bc9b333afe30bad2c4d0fadbd3362a0a47995d93d80a596fdf,PodSandboxId:33fa45c3c5b80da849fd42f7b08ce8abedcb3c4a8c98495f8b4ec4de72270644,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718223303780808234,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-2hfkx,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: c88103f0-de17-4f17-a1dd-fa97f936c891,},Annotations:map[string]string{io.kubernetes.container.hash: 81bd51d0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d4b4e6a74844aa3fb50a9b67334de1ccc7db3684015519cc4309f6862b0350,PodSandboxId:873613a097a7909e0e77bac97e43f11f54296876dad008e182b2f00acaa5f6e6,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718223291588963247,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-68z9r,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: fb378bcc-ffc3-427d-8d9c-3d4e10666a6f,},Annotations:map[string]string{io.kubernetes.container.hash: e1cd2245,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d4dd138630826914efef88030e35569cd5d20f0b2197c3bcdded7e1beaa4eb,PodSandboxId:383811ae9f7055532f19ae5088244cd03a0bb0990fbb1532a48364ea60d890fe,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171822
3257660480529,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-mwtps,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a01c7a18-474f-45e3-906d-4e7b54800ba0,},Annotations:map[string]string{io.kubernetes.container.hash: fe6613a5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e20f759a6abff3ddce914eaf3504b894db1e7b7f70afe41e69d452e5fc1dfe3,PodSandboxId:c49212b8d6af6f3d2a1b8fb049683c74fe9ee6ff60e6af3a96335877190cb1c9,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718223246450294727,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-g6s5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27,},Annotations:map[string]string{io.kubernetes.container.hash: 5669584c,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9757ad6bc984243b22d2f31c4395538db3da62772209d217bf69cae679a63a,PodSandboxId:ddf8ae86868ac83ae5e3874de4f61780a1c39e617b553fcb2d35426d0f88a699,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718223209119052384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa128d9-0268-4ed7-9ba8-a3405add5dd5,},Annotations:map[string]string{io.kubernetes.container.hash: 34461c91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb2eb9b48d57af25f2941f433f8710963ad414fa4886b1ecb969e2b098189f9,PodSandboxId:5702f8892f7515b81c0766a74d27a0032b536ec4957822a00fa534a2f1a06013,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c007
97ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718223205859284625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-whsws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad628dac-001d-4531-89fd-33629dcc54cb,},Annotations:map[string]string{io.kubernetes.container.hash: 5d7fd139,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af9c28efa5649762365aaf662619e5ef12712149626320de929ff8f3d0913b91,PodSand
boxId:3cde88d922a86baa00b3b490e2f52a80f9ace76f2d5aa8dc497a477acbbf7435,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718223203479106836,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rbbmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07785176-2ce1-4304-992e-8962b08939db,},Annotations:map[string]string{io.kubernetes.container.hash: e1528d11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d5a6b5dc6138a6fe7531c084808d8d1872a0a5bad983b681b1dea0b1283c97,PodSandboxId:80b1160c82ccdf458c755a43f93
905583a5e90d04d49a4b7fdc59afe4508e485,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718223183845513001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87bd34de49164d7e23d3bd85d31e57a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d11557b4b02db631833f6cf99c4c112b3830f7f51a7c6df64e2b87f28c3dbb36,PodSandboxId:a39da441a9fa91e95f11c5f46b31b8228f32e202998f
1d4340a08e23e02ead01,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718223183815641904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 478ea678f98fdcf850e28e6b8d10601f,},Annotations:map[string]string{io.kubernetes.container.hash: f5425390,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df6fd69f56389dcb1fb1abcd816b7212dccc260e9e123a6a0582bb35082f34d,PodSandboxId:76cfdcd865926bb5ab09cff69860418be695006b5268baad8ab4a00d44c78
b5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718223183730729809,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecd228d1130bcad7d53d31f82588ba53,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a312cd5dbab5c630a6d9070588273ef333b2e11e4341e8003d515698a4f42c8d,PodSandboxId:4e67a4ce941c1583ba92539c39a261b550a6b8860c438989a2d
314acc04c1250,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718223183717163642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27583d060d65b458ede39de8e114234,},Annotations:map[string]string{io.kubernetes.container.hash: 2184cd2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11b3b631-9179-41a8-8bc6-01315bc2cf63 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.289557495Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12d644d1-b406-430b-98df-114932fbf2e0 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.289637986Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12d644d1-b406-430b-98df-114932fbf2e0 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.290779063Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81afbd8e-16c1-4491-b411-e20a7c2b4d43 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.292101530Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718223638292076127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584737,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81afbd8e-16c1-4491-b411-e20a7c2b4d43 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.292619966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0550de18-7a4b-4918-b3c4-03d274935a8a name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.292670568Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0550de18-7a4b-4918-b3c4-03d274935a8a name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:20:38 addons-899843 crio[685]: time="2024-06-12 20:20:38.292945666Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3f50da1dc82c3bdc5c4aee9cbe33faf413edf00ec08a86f07293581216d844,PodSandboxId:897b715d5540fce6bfb92cdd4e8e348fd89fe3b9a28ca69d862bb222805813cc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718223473051090169,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-kbtl7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3c060ce-d46f-4a37-b318-985519591838,},Annotations:map[string]string{io.kubernetes.container.hash: 4c8d702,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bd23008db0bf6f352b7240f729961b64b1e163658cf859b110036cea0b36343,PodSandboxId:a4ce8e3e4607485156cc665da1652d4c57412b86ed986db2939f0b0773956e1e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718223331979871665,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 63c525be-66b7-432d-b1ae-2f835c9880fb,},Annotations:map[string]string{io.kuberne
tes.container.hash: 1acc993f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a9f043f430d9bc9b333afe30bad2c4d0fadbd3362a0a47995d93d80a596fdf,PodSandboxId:33fa45c3c5b80da849fd42f7b08ce8abedcb3c4a8c98495f8b4ec4de72270644,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718223303780808234,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-2hfkx,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: c88103f0-de17-4f17-a1dd-fa97f936c891,},Annotations:map[string]string{io.kubernetes.container.hash: 81bd51d0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d4b4e6a74844aa3fb50a9b67334de1ccc7db3684015519cc4309f6862b0350,PodSandboxId:873613a097a7909e0e77bac97e43f11f54296876dad008e182b2f00acaa5f6e6,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718223291588963247,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-68z9r,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: fb378bcc-ffc3-427d-8d9c-3d4e10666a6f,},Annotations:map[string]string{io.kubernetes.container.hash: e1cd2245,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d4dd138630826914efef88030e35569cd5d20f0b2197c3bcdded7e1beaa4eb,PodSandboxId:383811ae9f7055532f19ae5088244cd03a0bb0990fbb1532a48364ea60d890fe,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171822
3257660480529,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-mwtps,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a01c7a18-474f-45e3-906d-4e7b54800ba0,},Annotations:map[string]string{io.kubernetes.container.hash: fe6613a5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e20f759a6abff3ddce914eaf3504b894db1e7b7f70afe41e69d452e5fc1dfe3,PodSandboxId:c49212b8d6af6f3d2a1b8fb049683c74fe9ee6ff60e6af3a96335877190cb1c9,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718223246450294727,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-g6s5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27,},Annotations:map[string]string{io.kubernetes.container.hash: 5669584c,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9757ad6bc984243b22d2f31c4395538db3da62772209d217bf69cae679a63a,PodSandboxId:ddf8ae86868ac83ae5e3874de4f61780a1c39e617b553fcb2d35426d0f88a699,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718223209119052384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa128d9-0268-4ed7-9ba8-a3405add5dd5,},Annotations:map[string]string{io.kubernetes.container.hash: 34461c91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbb2eb9b48d57af25f2941f433f8710963ad414fa4886b1ecb969e2b098189f9,PodSandboxId:5702f8892f7515b81c0766a74d27a0032b536ec4957822a00fa534a2f1a06013,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c007
97ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718223205859284625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-whsws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad628dac-001d-4531-89fd-33629dcc54cb,},Annotations:map[string]string{io.kubernetes.container.hash: 5d7fd139,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af9c28efa5649762365aaf662619e5ef12712149626320de929ff8f3d0913b91,PodSand
boxId:3cde88d922a86baa00b3b490e2f52a80f9ace76f2d5aa8dc497a477acbbf7435,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718223203479106836,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rbbmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07785176-2ce1-4304-992e-8962b08939db,},Annotations:map[string]string{io.kubernetes.container.hash: e1528d11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d5a6b5dc6138a6fe7531c084808d8d1872a0a5bad983b681b1dea0b1283c97,PodSandboxId:80b1160c82ccdf458c755a43f93
905583a5e90d04d49a4b7fdc59afe4508e485,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718223183845513001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a87bd34de49164d7e23d3bd85d31e57a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d11557b4b02db631833f6cf99c4c112b3830f7f51a7c6df64e2b87f28c3dbb36,PodSandboxId:a39da441a9fa91e95f11c5f46b31b8228f32e202998f
1d4340a08e23e02ead01,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718223183815641904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 478ea678f98fdcf850e28e6b8d10601f,},Annotations:map[string]string{io.kubernetes.container.hash: f5425390,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df6fd69f56389dcb1fb1abcd816b7212dccc260e9e123a6a0582bb35082f34d,PodSandboxId:76cfdcd865926bb5ab09cff69860418be695006b5268baad8ab4a00d44c78
b5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718223183730729809,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecd228d1130bcad7d53d31f82588ba53,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a312cd5dbab5c630a6d9070588273ef333b2e11e4341e8003d515698a4f42c8d,PodSandboxId:4e67a4ce941c1583ba92539c39a261b550a6b8860c438989a2d
314acc04c1250,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718223183717163642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-899843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27583d060d65b458ede39de8e114234,},Annotations:map[string]string{io.kubernetes.container.hash: 2184cd2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0550de18-7a4b-4918-b3c4-03d274935a8a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	af3f50da1dc82       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 2 minutes ago       Running             hello-world-app           0                   897b715d5540f       hello-world-app-86c47465fc-kbtl7
	1bd23008db0bf       docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa                         5 minutes ago       Running             nginx                     0                   a4ce8e3e46074       nginx
	35a9f043f430d       ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5                   5 minutes ago       Running             headlamp                  0                   33fa45c3c5b80       headlamp-7fc69f7444-2hfkx
	87d4b4e6a7484       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            5 minutes ago       Running             gcp-auth                  0                   873613a097a79       gcp-auth-5db96cd9b4-68z9r
	71d4dd1386308       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         6 minutes ago       Running             yakd                      0                   383811ae9f705       yakd-dashboard-5ddbf7d777-mwtps
	7e20f759a6abf       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   c49212b8d6af6       metrics-server-c59844bb4-g6s5d
	8a9757ad6bc98       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   ddf8ae86868ac       storage-provisioner
	bbb2eb9b48d57       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   5702f8892f751       coredns-7db6d8ff4d-whsws
	af9c28efa5649       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                        7 minutes ago       Running             kube-proxy                0                   3cde88d922a86       kube-proxy-rbbmx
	c9d5a6b5dc613       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                        7 minutes ago       Running             kube-scheduler            0                   80b1160c82ccd       kube-scheduler-addons-899843
	d11557b4b02db       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                        7 minutes ago       Running             kube-apiserver            0                   a39da441a9fa9       kube-apiserver-addons-899843
	3df6fd69f5638       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                        7 minutes ago       Running             kube-controller-manager   0                   76cfdcd865926       kube-controller-manager-addons-899843
	a312cd5dbab5c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        7 minutes ago       Running             etcd                      0                   4e67a4ce941c1       etcd-addons-899843
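The container listing above is what CRI-O reports for the addons-899843 node. As a cross-check (a sketch, not something recorded in this run), the same view can normally be pulled straight from the node over the minikube SSH channel; only the profile name is taken from this report, the rest is stock crictl tooling:

  # list every container CRI-O knows about, including exited ones
  out/minikube-linux-amd64 -p addons-899843 ssh "sudo crictl ps -a"
  # fetch the logs of one container by the ID prefix printed in the first column
  out/minikube-linux-amd64 -p addons-899843 ssh "sudo crictl logs af3f50da1dc82"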
	
	
	==> coredns [bbb2eb9b48d57af25f2941f433f8710963ad414fa4886b1ecb969e2b098189f9] <==
	[INFO] 10.244.0.9:44984 - 63898 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001537668s
	[INFO] 10.244.0.9:55734 - 33224 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00009839s
	[INFO] 10.244.0.9:55734 - 6858 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114016s
	[INFO] 10.244.0.9:51890 - 8270 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000188916s
	[INFO] 10.244.0.9:51890 - 10828 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00047844s
	[INFO] 10.244.0.9:46906 - 24528 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000180967s
	[INFO] 10.244.0.9:46906 - 40658 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000199381s
	[INFO] 10.244.0.9:39398 - 25333 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000054258s
	[INFO] 10.244.0.9:39398 - 9675 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00007112s
	[INFO] 10.244.0.9:45545 - 35049 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000102468s
	[INFO] 10.244.0.9:45545 - 39403 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000221204s
	[INFO] 10.244.0.9:60573 - 22396 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035317s
	[INFO] 10.244.0.9:60573 - 44914 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004122s
	[INFO] 10.244.0.9:41404 - 38014 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000155102s
	[INFO] 10.244.0.9:41404 - 8048 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000307046s
	[INFO] 10.244.0.22:53845 - 34394 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000296173s
	[INFO] 10.244.0.22:50119 - 31106 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00011825s
	[INFO] 10.244.0.22:60392 - 4096 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122548s
	[INFO] 10.244.0.22:53159 - 29292 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000058811s
	[INFO] 10.244.0.22:60239 - 41880 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00010106s
	[INFO] 10.244.0.22:53011 - 5038 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000199657s
	[INFO] 10.244.0.22:47815 - 7584 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000478038s
	[INFO] 10.244.0.22:43708 - 60949 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000770248s
	[INFO] 10.244.0.25:42302 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00036584s
	[INFO] 10.244.0.25:39976 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000210016s
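The long run of NXDOMAIN answers above is expected behaviour rather than a resolution failure: with the default ndots:5 setting, a pod looking up registry.kube-system.svc.cluster.local first tries the name with each search suffix appended, and only the final, fully qualified attempt returns NOERROR. A pod in kube-system would typically carry a resolv.conf along these lines (values assumed from the Kubernetes defaults, not captured in this log):

  nameserver 10.96.0.10
  search kube-system.svc.cluster.local svc.cluster.local cluster.local
  options ndots:5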
	
	
	==> describe nodes <==
	Name:               addons-899843
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-899843
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=addons-899843
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T20_13_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-899843
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:13:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-899843
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:20:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:18:15 +0000   Wed, 12 Jun 2024 20:13:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:18:15 +0000   Wed, 12 Jun 2024 20:13:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:18:15 +0000   Wed, 12 Jun 2024 20:13:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:18:15 +0000   Wed, 12 Jun 2024 20:13:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    addons-899843
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7720e2d2b9d4fac92e9d34a7e19b889
	  System UUID:                c7720e2d-2b9d-4fac-92e9-d34a7e19b889
	  Boot ID:                    d7e9cbad-9bfc-4e95-97e5-e442875e4a37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-kbtl7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  gcp-auth                    gcp-auth-5db96cd9b4-68z9r                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m3s
	  headlamp                    headlamp-7fc69f7444-2hfkx                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 coredns-7db6d8ff4d-whsws                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m16s
	  kube-system                 etcd-addons-899843                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m29s
	  kube-system                 kube-apiserver-addons-899843             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-controller-manager-addons-899843    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-proxy-rbbmx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 kube-scheduler-addons-899843             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-mwtps          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m35s (x8 over 7m36s)  kubelet          Node addons-899843 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m35s (x8 over 7m36s)  kubelet          Node addons-899843 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m35s (x7 over 7m36s)  kubelet          Node addons-899843 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m29s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m29s                  kubelet          Node addons-899843 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m29s                  kubelet          Node addons-899843 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m29s                  kubelet          Node addons-899843 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m28s                  kubelet          Node addons-899843 status is now: NodeReady
	  Normal  RegisteredNode           7m17s                  node-controller  Node addons-899843 event: Registered Node addons-899843 in Controller
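The node summary above is the standard kubectl node description. It can be regenerated against the same cluster, and the per-namespace pod placement checked alongside it, with commands of this shape (only the context and node name are taken from this report):

  kubectl --context addons-899843 describe node addons-899843
  kubectl --context addons-899843 get pods -A -o wide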
	
	
	==> dmesg <==
	[  +5.001851] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.098927] kauditd_printk_skb: 120 callbacks suppressed
	[  +5.087273] kauditd_printk_skb: 76 callbacks suppressed
	[ +13.652101] kauditd_printk_skb: 19 callbacks suppressed
	[Jun12 20:14] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.364142] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.275660] kauditd_printk_skb: 23 callbacks suppressed
	[ +10.110740] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.736161] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.016679] kauditd_printk_skb: 64 callbacks suppressed
	[  +7.728139] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.336233] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.162446] kauditd_printk_skb: 12 callbacks suppressed
	[Jun12 20:15] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.213681] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.878218] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.144634] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.094372] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.155851] kauditd_printk_skb: 36 callbacks suppressed
	[ +23.693912] kauditd_printk_skb: 5 callbacks suppressed
	[Jun12 20:16] kauditd_printk_skb: 8 callbacks suppressed
	[ +30.594921] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.854653] kauditd_printk_skb: 33 callbacks suppressed
	[Jun12 20:17] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.214368] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a312cd5dbab5c630a6d9070588273ef333b2e11e4341e8003d515698a4f42c8d] <==
	{"level":"info","ts":"2024-06-12T20:14:46.105479Z","caller":"traceutil/trace.go:171","msg":"trace[159866388] linearizableReadLoop","detail":"{readStateIndex:1181; appliedIndex:1180; }","duration":"376.741899ms","start":"2024-06-12T20:14:45.728722Z","end":"2024-06-12T20:14:46.105464Z","steps":["trace[159866388] 'read index received'  (duration: 376.486582ms)","trace[159866388] 'applied index is now lower than readState.Index'  (duration: 254.536µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-12T20:14:46.105567Z","caller":"traceutil/trace.go:171","msg":"trace[995288748] transaction","detail":"{read_only:false; response_revision:1146; number_of_response:1; }","duration":"395.553724ms","start":"2024-06-12T20:14:45.710007Z","end":"2024-06-12T20:14:46.105561Z","steps":["trace[995288748] 'process raft request'  (duration: 395.253738ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:14:46.105648Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T20:14:45.709995Z","time spent":"395.591476ms","remote":"127.0.0.1:37460","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":764,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-ljwsf.17d85af8725877a6\" mod_revision:1140 > success:<request_put:<key:\"/registry/events/gadget/gadget-ljwsf.17d85af8725877a6\" value_size:693 lease:2691622117465690094 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-ljwsf.17d85af8725877a6\" > >"}
	{"level":"warn","ts":"2024-06-12T20:14:46.10593Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"377.209447ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-06-12T20:14:46.105956Z","caller":"traceutil/trace.go:171","msg":"trace[798071367] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1146; }","duration":"377.254431ms","start":"2024-06-12T20:14:45.728693Z","end":"2024-06-12T20:14:46.105948Z","steps":["trace[798071367] 'agreement among raft nodes before linearized reading'  (duration: 377.118408ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:14:46.105975Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T20:14:45.728681Z","time spent":"377.289922ms","remote":"127.0.0.1:37536","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-06-12T20:14:46.106149Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.038664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-06-12T20:14:46.106166Z","caller":"traceutil/trace.go:171","msg":"trace[168276800] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1146; }","duration":"186.073519ms","start":"2024-06-12T20:14:45.920088Z","end":"2024-06-12T20:14:46.106161Z","steps":["trace[168276800] 'agreement among raft nodes before linearized reading'  (duration: 186.005157ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:14:46.106631Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.343984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-06-12T20:14:46.10668Z","caller":"traceutil/trace.go:171","msg":"trace[1284212033] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1146; }","duration":"113.416319ms","start":"2024-06-12T20:14:45.993256Z","end":"2024-06-12T20:14:46.106672Z","steps":["trace[1284212033] 'agreement among raft nodes before linearized reading'  (duration: 113.047971ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:14:57.400401Z","caller":"traceutil/trace.go:171","msg":"trace[1762200910] transaction","detail":"{read_only:false; response_revision:1220; number_of_response:1; }","duration":"105.985912ms","start":"2024-06-12T20:14:57.294341Z","end":"2024-06-12T20:14:57.400327Z","steps":["trace[1762200910] 'process raft request'  (duration: 105.377641ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:15:01.706419Z","caller":"traceutil/trace.go:171","msg":"trace[655074709] transaction","detail":"{read_only:false; response_revision:1259; number_of_response:1; }","duration":"125.651414ms","start":"2024-06-12T20:15:01.580755Z","end":"2024-06-12T20:15:01.706407Z","steps":["trace[655074709] 'process raft request'  (duration: 125.501366ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:15:03.495171Z","caller":"traceutil/trace.go:171","msg":"trace[1327267216] transaction","detail":"{read_only:false; response_revision:1268; number_of_response:1; }","duration":"128.856372ms","start":"2024-06-12T20:15:03.366298Z","end":"2024-06-12T20:15:03.495154Z","steps":["trace[1327267216] 'process raft request'  (duration: 128.751804ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:16:00.390923Z","caller":"traceutil/trace.go:171","msg":"trace[2004317318] transaction","detail":"{read_only:false; response_revision:1598; number_of_response:1; }","duration":"408.786624ms","start":"2024-06-12T20:15:59.982115Z","end":"2024-06-12T20:16:00.390902Z","steps":["trace[2004317318] 'process raft request'  (duration: 408.688703ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:16:00.391094Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T20:15:59.982101Z","time spent":"408.908653ms","remote":"127.0.0.1:52670","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1592 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-06-12T20:16:00.391536Z","caller":"traceutil/trace.go:171","msg":"trace[1448742879] linearizableReadLoop","detail":"{readStateIndex:1654; appliedIndex:1654; }","duration":"354.07715ms","start":"2024-06-12T20:16:00.037441Z","end":"2024-06-12T20:16:00.391518Z","steps":["trace[1448742879] 'read index received'  (duration: 354.069625ms)","trace[1448742879] 'applied index is now lower than readState.Index'  (duration: 6.611µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T20:16:00.391673Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"354.223549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6032"}
	{"level":"info","ts":"2024-06-12T20:16:00.391731Z","caller":"traceutil/trace.go:171","msg":"trace[65560386] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1598; }","duration":"354.306674ms","start":"2024-06-12T20:16:00.037413Z","end":"2024-06-12T20:16:00.391719Z","steps":["trace[65560386] 'agreement among raft nodes before linearized reading'  (duration: 354.174219ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:16:00.391752Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T20:16:00.037398Z","time spent":"354.349686ms","remote":"127.0.0.1:37562","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":6055,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2024-06-12T20:16:31.544433Z","caller":"traceutil/trace.go:171","msg":"trace[752769622] linearizableReadLoop","detail":"{readStateIndex:1761; appliedIndex:1760; }","duration":"202.08711ms","start":"2024-06-12T20:16:31.342248Z","end":"2024-06-12T20:16:31.544335Z","steps":["trace[752769622] 'read index received'  (duration: 201.937756ms)","trace[752769622] 'applied index is now lower than readState.Index'  (duration: 148.941µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-12T20:16:31.544544Z","caller":"traceutil/trace.go:171","msg":"trace[1527852028] transaction","detail":"{read_only:false; response_revision:1698; number_of_response:1; }","duration":"213.010261ms","start":"2024-06-12T20:16:31.331514Z","end":"2024-06-12T20:16:31.544524Z","steps":["trace[1527852028] 'process raft request'  (duration: 212.707631ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:16:31.544816Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.495708ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-12T20:16:31.544876Z","caller":"traceutil/trace.go:171","msg":"trace[1467260830] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1698; }","duration":"202.640335ms","start":"2024-06-12T20:16:31.342221Z","end":"2024-06-12T20:16:31.544862Z","steps":["trace[1467260830] 'agreement among raft nodes before linearized reading'  (duration: 202.355357ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:16:31.544835Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.484473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6126"}
	{"level":"info","ts":"2024-06-12T20:16:31.545024Z","caller":"traceutil/trace.go:171","msg":"trace[2011878052] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1698; }","duration":"198.71668ms","start":"2024-06-12T20:16:31.346302Z","end":"2024-06-12T20:16:31.545019Z","steps":["trace[2011878052] 'agreement among raft nodes before linearized reading'  (duration: 198.449424ms)"],"step_count":1}
	
	
	==> gcp-auth [87d4b4e6a74844aa3fb50a9b67334de1ccc7db3684015519cc4309f6862b0350] <==
	2024/06/12 20:14:51 GCP Auth Webhook started!
	2024/06/12 20:14:57 Ready to marshal response ...
	2024/06/12 20:14:57 Ready to write response ...
	2024/06/12 20:14:57 Ready to marshal response ...
	2024/06/12 20:14:57 Ready to write response ...
	2024/06/12 20:14:57 Ready to marshal response ...
	2024/06/12 20:14:57 Ready to write response ...
	2024/06/12 20:15:01 Ready to marshal response ...
	2024/06/12 20:15:01 Ready to write response ...
	2024/06/12 20:15:07 Ready to marshal response ...
	2024/06/12 20:15:07 Ready to write response ...
	2024/06/12 20:15:14 Ready to marshal response ...
	2024/06/12 20:15:14 Ready to write response ...
	2024/06/12 20:15:14 Ready to marshal response ...
	2024/06/12 20:15:14 Ready to write response ...
	2024/06/12 20:15:27 Ready to marshal response ...
	2024/06/12 20:15:27 Ready to write response ...
	2024/06/12 20:15:27 Ready to marshal response ...
	2024/06/12 20:15:27 Ready to write response ...
	2024/06/12 20:15:53 Ready to marshal response ...
	2024/06/12 20:15:53 Ready to write response ...
	2024/06/12 20:16:23 Ready to marshal response ...
	2024/06/12 20:16:23 Ready to write response ...
	2024/06/12 20:17:48 Ready to marshal response ...
	2024/06/12 20:17:48 Ready to write response ...
	
	
	==> kernel <==
	 20:20:38 up 8 min,  0 users,  load average: 0.09, 0.65, 0.49
	Linux addons-899843 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d11557b4b02db631833f6cf99c4c112b3830f7f51a7c6df64e2b87f28c3dbb36] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0612 20:15:08.384000       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.176.248:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.176.248:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.176.248:443: connect: connection refused
	E0612 20:15:08.390501       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.176.248:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.176.248:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.176.248:443: connect: connection refused
	I0612 20:15:08.466294       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0612 20:15:21.456841       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0612 20:15:22.496198       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0612 20:15:27.031564       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0612 20:15:27.275413       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.48.92"}
	E0612 20:15:43.305008       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0612 20:16:08.103617       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0612 20:16:40.369801       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0612 20:16:40.370025       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0612 20:16:40.391605       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0612 20:16:40.391660       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0612 20:16:40.400276       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0612 20:16:40.400340       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0612 20:16:40.407436       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0612 20:16:40.407510       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0612 20:16:40.449479       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0612 20:16:40.449585       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0612 20:16:40.483903       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	W0612 20:16:41.400678       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0612 20:16:41.449707       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0612 20:16:41.468119       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0612 20:17:49.049627       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.234.57"}
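The connection-refused errors against 10.103.176.248:443 are the apiserver failing to reach the aggregated v1beta1.metrics.k8s.io API while metrics-server was still becoming ready around 20:15:08. A minimal check of the aggregation status afterwards (standard kubectl, context name from this report):

  kubectl --context addons-899843 get apiservice v1beta1.metrics.k8s.io
  kubectl --context addons-899843 top nodes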
	
	
	==> kube-controller-manager [3df6fd69f56389dcb1fb1abcd816b7212dccc260e9e123a6a0582bb35082f34d] <==
	W0612 20:18:32.895559       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:18:32.895731       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:18:44.556538       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:18:44.556608       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:18:50.263120       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:18:50.263150       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:18:59.513489       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:18:59.513544       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:19:20.200176       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:19:20.200512       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:19:24.046040       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:19:24.046103       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:19:34.837659       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:19:34.837806       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:19:36.399582       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:19:36.399689       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:19:57.546118       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:19:57.546312       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:20:05.643700       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:20:05.643819       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:20:10.622801       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:20:10.622911       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:20:15.168856       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:20:15.169042       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0612 20:20:37.224341       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="8.42µs"
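The repeating reflector failures for *v1.PartialObjectMetadata most likely come from metadata informers that still reference API groups torn down earlier in the run (the apiserver log above shows the snapshot.storage.k8s.io and gadget.kinvolk.io watchers being terminated); they are noisy but generally harmless once the offending CRDs are gone. A quick way to see what is still registered (standard kubectl; the grep pattern is only illustrative):

  kubectl --context addons-899843 api-resources | grep -E 'snapshot|gadget'
  kubectl --context addons-899843 get crd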
	
	
	==> kube-proxy [af9c28efa5649762365aaf662619e5ef12712149626320de929ff8f3d0913b91] <==
	I0612 20:13:24.342635       1 server_linux.go:69] "Using iptables proxy"
	I0612 20:13:24.371568       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.248"]
	I0612 20:13:24.502205       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 20:13:24.502235       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 20:13:24.502250       1 server_linux.go:165] "Using iptables Proxier"
	I0612 20:13:24.516483       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 20:13:24.516656       1 server.go:872] "Version info" version="v1.30.1"
	I0612 20:13:24.516671       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:13:24.518303       1 config.go:192] "Starting service config controller"
	I0612 20:13:24.518317       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 20:13:24.518415       1 config.go:101] "Starting endpoint slice config controller"
	I0612 20:13:24.518421       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 20:13:24.518726       1 config.go:319] "Starting node config controller"
	I0612 20:13:24.518731       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 20:13:24.618851       1 shared_informer.go:320] Caches are synced for node config
	I0612 20:13:24.618885       1 shared_informer.go:320] Caches are synced for service config
	I0612 20:13:24.618913       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c9d5a6b5dc6138a6fe7531c084808d8d1872a0a5bad983b681b1dea0b1283c97] <==
	W0612 20:13:06.352517       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0612 20:13:06.352547       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0612 20:13:06.352604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0612 20:13:06.352633       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0612 20:13:07.227225       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0612 20:13:07.227451       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0612 20:13:07.256252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0612 20:13:07.256304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0612 20:13:07.317503       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 20:13:07.317649       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 20:13:07.388248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0612 20:13:07.388439       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0612 20:13:07.388342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0612 20:13:07.388521       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0612 20:13:07.402810       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0612 20:13:07.403112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0612 20:13:07.445227       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0612 20:13:07.445274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0612 20:13:07.485454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0612 20:13:07.485502       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0612 20:13:07.597681       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0612 20:13:07.597834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0612 20:13:07.661194       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0612 20:13:07.661329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 20:13:09.027704       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 20:17:54 addons-899843 kubelet[1283]: I0612 20:17:54.298631    1283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a308f10269e8216d44a0e135eb51a1b25c1a225ddf71549ca8c9562feeafa3"} err="failed to get container status \"b8a308f10269e8216d44a0e135eb51a1b25c1a225ddf71549ca8c9562feeafa3\": rpc error: code = NotFound desc = could not find container \"b8a308f10269e8216d44a0e135eb51a1b25c1a225ddf71549ca8c9562feeafa3\": container with ID starting with b8a308f10269e8216d44a0e135eb51a1b25c1a225ddf71549ca8c9562feeafa3 not found: ID does not exist"
	Jun 12 20:17:55 addons-899843 kubelet[1283]: I0612 20:17:55.154915    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf" path="/var/lib/kubelet/pods/9cb4ae32-4770-4fcf-82f1-4167b8d1e4cf/volumes"
	Jun 12 20:18:09 addons-899843 kubelet[1283]: E0612 20:18:09.191335    1283 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:18:09 addons-899843 kubelet[1283]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:18:09 addons-899843 kubelet[1283]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:18:09 addons-899843 kubelet[1283]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:18:09 addons-899843 kubelet[1283]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:18:10 addons-899843 kubelet[1283]: I0612 20:18:10.228739    1283 scope.go:117] "RemoveContainer" containerID="a1fa06f1bdd4a48a9b93aa90febc4db617b0ecda48739d7f3565bb2c002addf2"
	Jun 12 20:18:10 addons-899843 kubelet[1283]: I0612 20:18:10.245576    1283 scope.go:117] "RemoveContainer" containerID="b8579183a5e6a5a33d78933ba88a50c931b668d00923a05e9f82f1ea19f15fbc"
	Jun 12 20:19:09 addons-899843 kubelet[1283]: E0612 20:19:09.189936    1283 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:19:09 addons-899843 kubelet[1283]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:19:09 addons-899843 kubelet[1283]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:19:09 addons-899843 kubelet[1283]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:19:09 addons-899843 kubelet[1283]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:20:09 addons-899843 kubelet[1283]: E0612 20:20:09.188666    1283 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:20:09 addons-899843 kubelet[1283]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:20:09 addons-899843 kubelet[1283]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:20:09 addons-899843 kubelet[1283]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:20:09 addons-899843 kubelet[1283]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:20:38 addons-899843 kubelet[1283]: I0612 20:20:38.610562    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt4mf\" (UniqueName: \"kubernetes.io/projected/4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27-kube-api-access-xt4mf\") pod \"4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27\" (UID: \"4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27\") "
	Jun 12 20:20:38 addons-899843 kubelet[1283]: I0612 20:20:38.610624    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27-tmp-dir\") pod \"4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27\" (UID: \"4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27\") "
	Jun 12 20:20:38 addons-899843 kubelet[1283]: I0612 20:20:38.610984    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27" (UID: "4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jun 12 20:20:38 addons-899843 kubelet[1283]: I0612 20:20:38.621319    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27-kube-api-access-xt4mf" (OuterVolumeSpecName: "kube-api-access-xt4mf") pod "4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27" (UID: "4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27"). InnerVolumeSpecName "kube-api-access-xt4mf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 12 20:20:38 addons-899843 kubelet[1283]: I0612 20:20:38.711567    1283 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27-tmp-dir\") on node \"addons-899843\" DevicePath \"\""
	Jun 12 20:20:38 addons-899843 kubelet[1283]: I0612 20:20:38.711611    1283 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xt4mf\" (UniqueName: \"kubernetes.io/projected/4ce5e9e4-af04-4282-a3a8-e6fb01c7eb27-kube-api-access-xt4mf\") on node \"addons-899843\" DevicePath \"\""
	
	
	==> storage-provisioner [8a9757ad6bc984243b22d2f31c4395538db3da62772209d217bf69cae679a63a] <==
	I0612 20:13:30.366230       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0612 20:13:30.472184       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0612 20:13:30.472298       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0612 20:13:30.505156       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0612 20:13:30.506469       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-899843_6b6be76f-bfec-4661-b11f-f7c147a1abd8!
	I0612 20:13:30.508133       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"225df958-7d42-4f80-ad26-74574bae21bd", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-899843_6b6be76f-bfec-4661-b11f-f7c147a1abd8 became leader
	I0612 20:13:30.609532       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-899843_6b6be76f-bfec-4661-b11f-f7c147a1abd8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-899843 -n addons-899843
helpers_test.go:261: (dbg) Run:  kubectl --context addons-899843 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (318.91s)
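Note on the post-mortem logs above: the kubelet entries at 20:20:38 show the metrics-server pod's volumes (tmp-dir, kube-api-access-xt4mf) being unmounted, i.e. the pod was torn down while the test was still waiting on it. A minimal sketch of commands for checking metrics-server health in this profile (the k8s-app=metrics-server label and the v1beta1.metrics.k8s.io APIService name are assumptions based on the stock minikube addon, not taken from this report):

    # Is the metrics-server pod present and ready?
    kubectl --context addons-899843 -n kube-system get pods -l k8s-app=metrics-server
    # Is the aggregated metrics API registered and Available?
    kubectl --context addons-899843 get apiservice v1beta1.metrics.k8s.io
    # Does the API actually serve node metrics?
    kubectl --context addons-899843 top nodes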

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.35s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-899843
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-899843: exit status 82 (2m0.476453932s)

                                                
                                                
-- stdout --
	* Stopping node "addons-899843"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-899843" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-899843
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-899843: exit status 11 (21.58526697s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.248:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-899843" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-899843
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-899843: exit status 11 (6.144515954s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.248:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-899843" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-899843
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-899843: exit status 11 (6.147452611s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.248:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-899843" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.35s)
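The failure above has two stages: `minikube stop` gives up with GUEST_STOP_TIMEOUT while the VM still reports "Running", and every later addons command then fails because SSH to 192.168.39.248:22 has no route to host. A minimal sketch for inspecting the underlying KVM guest when this happens (assuming the kvm2 driver names the libvirt domain after the profile and that virsh is available on the host):

    # What does libvirt think the guest is doing?
    virsh -c qemu:///system domstate addons-899843
    # Collect minikube's own view and logs for the issue report
    out/minikube-linux-amd64 status -p addons-899843
    out/minikube-linux-amd64 logs -p addons-899843 --file=logs.txt
    # Last resort: force the domain off so follow-up commands can reach a clean state
    virsh -c qemu:///system destroy addons-899843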

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (2.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image rm gcr.io/google-containers/addon-resizer:functional-944676 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-944676 image rm gcr.io/google-containers/addon-resizer:functional-944676 --alsologtostderr: (2.372338865s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image ls
functional_test.go:402: expected "gcr.io/google-containers/addon-resizer:functional-944676" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (2.73s)
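Here `image rm` returns success, yet the follow-up `image ls` still lists gcr.io/google-containers/addon-resizer:functional-944676. A minimal sketch for narrowing down whether the image really survived in the CRI-O store or only in minikube's listing (standard minikube/crictl commands, not taken from this test):

    # List images as minikube sees them
    out/minikube-linux-amd64 -p functional-944676 image ls
    # Cross-check directly against CRI-O inside the guest
    out/minikube-linux-amd64 -p functional-944676 ssh -- sudo crictl images | grep addon-resizer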

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 node stop m02 -v=7 --alsologtostderr
E0612 20:33:10.534290   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:34:32.454804   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:34:56.704606   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-844626 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.492530343s)

                                                
                                                
-- stdout --
	* Stopping node "ha-844626-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 20:33:06.010483   36857 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:33:06.011412   36857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:33:06.011424   36857 out.go:304] Setting ErrFile to fd 2...
	I0612 20:33:06.011429   36857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:33:06.011614   36857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:33:06.011957   36857 mustload.go:65] Loading cluster: ha-844626
	I0612 20:33:06.012440   36857 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:33:06.012463   36857 stop.go:39] StopHost: ha-844626-m02
	I0612 20:33:06.012875   36857 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:33:06.012947   36857 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:33:06.031657   36857 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35215
	I0612 20:33:06.033142   36857 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:33:06.033645   36857 main.go:141] libmachine: Using API Version  1
	I0612 20:33:06.033678   36857 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:33:06.034063   36857 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:33:06.036356   36857 out.go:177] * Stopping node "ha-844626-m02"  ...
	I0612 20:33:06.037773   36857 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0612 20:33:06.037800   36857 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:33:06.038046   36857 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0612 20:33:06.038066   36857 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:33:06.041322   36857 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:33:06.041734   36857 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:33:06.041761   36857 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:33:06.041884   36857 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:33:06.042050   36857 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:33:06.042190   36857 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:33:06.042322   36857 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	I0612 20:33:06.131085   36857 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0612 20:33:06.188508   36857 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0612 20:33:06.249636   36857 main.go:141] libmachine: Stopping "ha-844626-m02"...
	I0612 20:33:06.249679   36857 main.go:141] libmachine: (ha-844626-m02) Calling .GetState
	I0612 20:33:06.251075   36857 main.go:141] libmachine: (ha-844626-m02) Calling .Stop
	I0612 20:33:06.254237   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 0/120
	I0612 20:33:07.256502   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 1/120
	I0612 20:33:08.258365   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 2/120
	I0612 20:33:09.259801   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 3/120
	I0612 20:33:10.261824   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 4/120
	I0612 20:33:11.264022   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 5/120
	I0612 20:33:12.265286   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 6/120
	I0612 20:33:13.266711   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 7/120
	I0612 20:33:14.269166   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 8/120
	I0612 20:33:15.270593   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 9/120
	I0612 20:33:16.272766   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 10/120
	I0612 20:33:17.275218   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 11/120
	I0612 20:33:18.277534   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 12/120
	I0612 20:33:19.279656   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 13/120
	I0612 20:33:20.280991   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 14/120
	I0612 20:33:21.282503   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 15/120
	I0612 20:33:22.284087   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 16/120
	I0612 20:33:23.285523   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 17/120
	I0612 20:33:24.286827   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 18/120
	I0612 20:33:25.288313   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 19/120
	I0612 20:33:26.290470   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 20/120
	I0612 20:33:27.291888   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 21/120
	I0612 20:33:28.294022   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 22/120
	I0612 20:33:29.295589   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 23/120
	I0612 20:33:30.297186   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 24/120
	I0612 20:33:31.299097   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 25/120
	I0612 20:33:32.300874   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 26/120
	I0612 20:33:33.302373   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 27/120
	I0612 20:33:34.304480   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 28/120
	I0612 20:33:35.305644   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 29/120
	I0612 20:33:36.307891   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 30/120
	I0612 20:33:37.309245   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 31/120
	I0612 20:33:38.310497   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 32/120
	I0612 20:33:39.311841   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 33/120
	I0612 20:33:40.314071   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 34/120
	I0612 20:33:41.315874   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 35/120
	I0612 20:33:42.317838   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 36/120
	I0612 20:33:43.320243   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 37/120
	I0612 20:33:44.321827   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 38/120
	I0612 20:33:45.323788   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 39/120
	I0612 20:33:46.326012   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 40/120
	I0612 20:33:47.327441   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 41/120
	I0612 20:33:48.329848   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 42/120
	I0612 20:33:49.331186   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 43/120
	I0612 20:33:50.333060   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 44/120
	I0612 20:33:51.335099   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 45/120
	I0612 20:33:52.336348   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 46/120
	I0612 20:33:53.337819   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 47/120
	I0612 20:33:54.339126   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 48/120
	I0612 20:33:55.340923   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 49/120
	I0612 20:33:56.343483   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 50/120
	I0612 20:33:57.346051   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 51/120
	I0612 20:33:58.347335   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 52/120
	I0612 20:33:59.349577   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 53/120
	I0612 20:34:00.350855   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 54/120
	I0612 20:34:01.352691   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 55/120
	I0612 20:34:02.354419   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 56/120
	I0612 20:34:03.355678   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 57/120
	I0612 20:34:04.357686   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 58/120
	I0612 20:34:05.358798   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 59/120
	I0612 20:34:06.360879   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 60/120
	I0612 20:34:07.362494   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 61/120
	I0612 20:34:08.363928   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 62/120
	I0612 20:34:09.366114   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 63/120
	I0612 20:34:10.367921   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 64/120
	I0612 20:34:11.369420   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 65/120
	I0612 20:34:12.370586   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 66/120
	I0612 20:34:13.372235   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 67/120
	I0612 20:34:14.373852   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 68/120
	I0612 20:34:15.376047   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 69/120
	I0612 20:34:16.378338   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 70/120
	I0612 20:34:17.379987   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 71/120
	I0612 20:34:18.381588   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 72/120
	I0612 20:34:19.383362   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 73/120
	I0612 20:34:20.385731   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 74/120
	I0612 20:34:21.387709   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 75/120
	I0612 20:34:22.389046   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 76/120
	I0612 20:34:23.390380   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 77/120
	I0612 20:34:24.391918   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 78/120
	I0612 20:34:25.393644   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 79/120
	I0612 20:34:26.395772   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 80/120
	I0612 20:34:27.397096   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 81/120
	I0612 20:34:28.398554   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 82/120
	I0612 20:34:29.400099   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 83/120
	I0612 20:34:30.401586   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 84/120
	I0612 20:34:31.403006   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 85/120
	I0612 20:34:32.404461   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 86/120
	I0612 20:34:33.405810   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 87/120
	I0612 20:34:34.407568   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 88/120
	I0612 20:34:35.409546   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 89/120
	I0612 20:34:36.411548   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 90/120
	I0612 20:34:37.412830   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 91/120
	I0612 20:34:38.414161   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 92/120
	I0612 20:34:39.416314   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 93/120
	I0612 20:34:40.417723   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 94/120
	I0612 20:34:41.419664   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 95/120
	I0612 20:34:42.421468   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 96/120
	I0612 20:34:43.422644   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 97/120
	I0612 20:34:44.423906   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 98/120
	I0612 20:34:45.425593   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 99/120
	I0612 20:34:46.427686   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 100/120
	I0612 20:34:47.429961   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 101/120
	I0612 20:34:48.431483   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 102/120
	I0612 20:34:49.434018   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 103/120
	I0612 20:34:50.435990   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 104/120
	I0612 20:34:51.437938   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 105/120
	I0612 20:34:52.439422   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 106/120
	I0612 20:34:53.440724   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 107/120
	I0612 20:34:54.442158   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 108/120
	I0612 20:34:55.443686   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 109/120
	I0612 20:34:56.445049   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 110/120
	I0612 20:34:57.446897   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 111/120
	I0612 20:34:58.448338   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 112/120
	I0612 20:34:59.449635   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 113/120
	I0612 20:35:00.450933   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 114/120
	I0612 20:35:01.453072   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 115/120
	I0612 20:35:02.454439   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 116/120
	I0612 20:35:03.455998   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 117/120
	I0612 20:35:04.457337   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 118/120
	I0612 20:35:05.458700   36857 main.go:141] libmachine: (ha-844626-m02) Waiting for machine to stop 119/120
	I0612 20:35:06.459559   36857 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0612 20:35:06.459741   36857 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-844626 node stop m02 -v=7 --alsologtostderr": exit status 30
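The stderr above shows libmachine polling the m02 guest roughly once a second for 120 attempts (the "Waiting for machine to stop N/120" lines) before giving up while the VM still reports "Running"; the status check run below then probes each node over SSH and cannot reach m02. A minimal sketch for checking the secondary node directly (assuming the kvm2 driver names the libvirt domain after the node, ha-844626-m02):

    # Does libvirt agree the guest is still running?
    virsh -c qemu:///system domstate ha-844626-m02
    # Cluster-wide view from minikube after the failed stop
    out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr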
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr: exit status 3 (19.145829563s)

                                                
                                                
-- stdout --
	ha-844626
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-844626-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 20:35:06.502805   37295 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:35:06.503365   37295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:35:06.503416   37295 out.go:304] Setting ErrFile to fd 2...
	I0612 20:35:06.503432   37295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:35:06.503865   37295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:35:06.504215   37295 out.go:298] Setting JSON to false
	I0612 20:35:06.504293   37295 mustload.go:65] Loading cluster: ha-844626
	I0612 20:35:06.504371   37295 notify.go:220] Checking for updates...
	I0612 20:35:06.504838   37295 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:35:06.504860   37295 status.go:255] checking status of ha-844626 ...
	I0612 20:35:06.505238   37295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:06.505290   37295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:06.520447   37295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46333
	I0612 20:35:06.520884   37295 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:06.521512   37295 main.go:141] libmachine: Using API Version  1
	I0612 20:35:06.521532   37295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:06.521882   37295 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:06.522081   37295 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:35:06.523677   37295 status.go:330] ha-844626 host status = "Running" (err=<nil>)
	I0612 20:35:06.523693   37295 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:35:06.523991   37295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:06.524029   37295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:06.539848   37295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43663
	I0612 20:35:06.540246   37295 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:06.540715   37295 main.go:141] libmachine: Using API Version  1
	I0612 20:35:06.540733   37295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:06.541146   37295 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:06.541359   37295 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:35:06.544488   37295 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:06.545002   37295 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:35:06.545040   37295 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:06.545221   37295 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:35:06.545637   37295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:06.545691   37295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:06.560387   37295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42575
	I0612 20:35:06.560875   37295 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:06.561434   37295 main.go:141] libmachine: Using API Version  1
	I0612 20:35:06.561488   37295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:06.561897   37295 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:06.562149   37295 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:35:06.562386   37295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:06.562426   37295 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:35:06.566018   37295 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:06.566536   37295 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:35:06.566566   37295 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:06.566726   37295 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:35:06.566920   37295 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:35:06.567107   37295 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:35:06.567318   37295 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:35:06.648291   37295 ssh_runner.go:195] Run: systemctl --version
	I0612 20:35:06.657199   37295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:06.673999   37295 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:35:06.674023   37295 api_server.go:166] Checking apiserver status ...
	I0612 20:35:06.674067   37295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:35:06.690104   37295 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0612 20:35:06.701156   37295 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:35:06.701211   37295 ssh_runner.go:195] Run: ls
	I0612 20:35:06.706547   37295 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:35:06.711098   37295 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:35:06.711122   37295 status.go:422] ha-844626 apiserver status = Running (err=<nil>)
	I0612 20:35:06.711145   37295 status.go:257] ha-844626 status: &{Name:ha-844626 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:35:06.711165   37295 status.go:255] checking status of ha-844626-m02 ...
	I0612 20:35:06.711604   37295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:06.711649   37295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:06.726887   37295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32993
	I0612 20:35:06.727317   37295 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:06.727760   37295 main.go:141] libmachine: Using API Version  1
	I0612 20:35:06.727784   37295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:06.728114   37295 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:06.728310   37295 main.go:141] libmachine: (ha-844626-m02) Calling .GetState
	I0612 20:35:06.729839   37295 status.go:330] ha-844626-m02 host status = "Running" (err=<nil>)
	I0612 20:35:06.729865   37295 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:35:06.730286   37295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:06.730325   37295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:06.744672   37295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0612 20:35:06.745071   37295 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:06.745526   37295 main.go:141] libmachine: Using API Version  1
	I0612 20:35:06.745544   37295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:06.745836   37295 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:06.746032   37295 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:35:06.748888   37295 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:06.749335   37295 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:35:06.749367   37295 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:06.749560   37295 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:35:06.749938   37295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:06.749999   37295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:06.765484   37295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44723
	I0612 20:35:06.765876   37295 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:06.766332   37295 main.go:141] libmachine: Using API Version  1
	I0612 20:35:06.766356   37295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:06.766617   37295 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:06.766795   37295 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:35:06.766979   37295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:06.766995   37295 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:35:06.770435   37295 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:06.770799   37295 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:35:06.770844   37295 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:06.771023   37295 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:35:06.771236   37295 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:35:06.771415   37295 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:35:06.771540   37295 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	W0612 20:35:25.231472   37295 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.108:22: connect: no route to host
	W0612 20:35:25.231593   37295 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	E0612 20:35:25.231612   37295 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:35:25.231628   37295 status.go:257] ha-844626-m02 status: &{Name:ha-844626-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0612 20:35:25.231666   37295 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:35:25.231682   37295 status.go:255] checking status of ha-844626-m03 ...
	I0612 20:35:25.232090   37295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:25.232150   37295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:25.247302   37295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45533
	I0612 20:35:25.247754   37295 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:25.248258   37295 main.go:141] libmachine: Using API Version  1
	I0612 20:35:25.248286   37295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:25.248572   37295 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:25.248768   37295 main.go:141] libmachine: (ha-844626-m03) Calling .GetState
	I0612 20:35:25.250592   37295 status.go:330] ha-844626-m03 host status = "Running" (err=<nil>)
	I0612 20:35:25.250608   37295 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:35:25.250949   37295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:25.250986   37295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:25.266379   37295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I0612 20:35:25.266818   37295 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:25.267330   37295 main.go:141] libmachine: Using API Version  1
	I0612 20:35:25.267366   37295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:25.267658   37295 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:25.267812   37295 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:35:25.270633   37295 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:25.271012   37295 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:35:25.271080   37295 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:25.271257   37295 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:35:25.271662   37295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:25.271707   37295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:25.286276   37295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43919
	I0612 20:35:25.286739   37295 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:25.287307   37295 main.go:141] libmachine: Using API Version  1
	I0612 20:35:25.287327   37295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:25.287671   37295 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:25.287875   37295 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:35:25.288091   37295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:25.288109   37295 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:35:25.291130   37295 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:25.291638   37295 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:35:25.291681   37295 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:25.291848   37295 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:35:25.292040   37295 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:35:25.292212   37295 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:35:25.292387   37295 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:35:25.377112   37295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:25.400252   37295 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:35:25.400283   37295 api_server.go:166] Checking apiserver status ...
	I0612 20:35:25.400318   37295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:35:25.416828   37295 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0612 20:35:25.427903   37295 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:35:25.427962   37295 ssh_runner.go:195] Run: ls
	I0612 20:35:25.435664   37295 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:35:25.441925   37295 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:35:25.441953   37295 status.go:422] ha-844626-m03 apiserver status = Running (err=<nil>)
	I0612 20:35:25.441965   37295 status.go:257] ha-844626-m03 status: &{Name:ha-844626-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
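The healthz check logged just above boils down to an HTTPS GET against the control-plane VIP, treating a 200 response with body "ok" as apiserver Running. A minimal stand-alone sketch of such a probe in Go follows; skipping TLS verification is an assumption made purely to keep the example short, whereas the real check would trust the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // NOTE: InsecureSkipVerify is a simplification for this sketch only;
        // a real status check would verify the apiserver certificate.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.254:8443/healthz")
        if err != nil {
            fmt.Println("apiserver status = Stopped:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode == http.StatusOK && string(body) == "ok" {
            fmt.Println("apiserver status = Running")
        } else {
            fmt.Printf("unexpected healthz response: %d %q\n", resp.StatusCode, body)
        }
    }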
	I0612 20:35:25.441985   37295 status.go:255] checking status of ha-844626-m04 ...
	I0612 20:35:25.442280   37295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:25.442313   37295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:25.457364   37295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I0612 20:35:25.457754   37295 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:25.458246   37295 main.go:141] libmachine: Using API Version  1
	I0612 20:35:25.458264   37295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:25.458558   37295 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:25.458753   37295 main.go:141] libmachine: (ha-844626-m04) Calling .GetState
	I0612 20:35:25.460411   37295 status.go:330] ha-844626-m04 host status = "Running" (err=<nil>)
	I0612 20:35:25.460426   37295 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:35:25.460732   37295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:25.460765   37295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:25.475041   37295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44405
	I0612 20:35:25.475528   37295 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:25.475939   37295 main.go:141] libmachine: Using API Version  1
	I0612 20:35:25.475959   37295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:25.476254   37295 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:25.476422   37295 main.go:141] libmachine: (ha-844626-m04) Calling .GetIP
	I0612 20:35:25.478948   37295 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:25.479373   37295 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:35:25.479412   37295 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:25.479581   37295 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:35:25.479980   37295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:25.480022   37295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:25.495189   37295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0612 20:35:25.495698   37295 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:25.496178   37295 main.go:141] libmachine: Using API Version  1
	I0612 20:35:25.496203   37295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:25.496546   37295 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:25.496702   37295 main.go:141] libmachine: (ha-844626-m04) Calling .DriverName
	I0612 20:35:25.496903   37295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:25.496920   37295 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHHostname
	I0612 20:35:25.499486   37295 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:25.499939   37295 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:35:25.499969   37295 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:25.500132   37295 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHPort
	I0612 20:35:25.500320   37295 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHKeyPath
	I0612 20:35:25.500497   37295 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHUsername
	I0612 20:35:25.500657   37295 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m04/id_rsa Username:docker}
	I0612 20:35:25.587785   37295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:25.604715   37295 status.go:257] ha-844626-m04 status: &{Name:ha-844626-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-844626 -n ha-844626
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-844626 logs -n 25: (1.415390322s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-844626 cp ha-844626-m03:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile43944605/001/cp-test_ha-844626-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m03:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626:/home/docker/cp-test_ha-844626-m03_ha-844626.txt                     |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626 sudo cat                                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m03_ha-844626.txt                               |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m03:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m02:/home/docker/cp-test_ha-844626-m03_ha-844626-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626-m02 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m03_ha-844626-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m03:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04:/home/docker/cp-test_ha-844626-m03_ha-844626-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626-m04 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m03_ha-844626-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-844626 cp testdata/cp-test.txt                                              | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile43944605/001/cp-test_ha-844626-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626:/home/docker/cp-test_ha-844626-m04_ha-844626.txt                     |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626 sudo cat                                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m04_ha-844626.txt                               |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m02:/home/docker/cp-test_ha-844626-m04_ha-844626-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626-m02 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m04_ha-844626-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m03:/home/docker/cp-test_ha-844626-m04_ha-844626-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626-m03 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m04_ha-844626-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-844626 node stop m02 -v=7                                                   | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 20:27:40
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 20:27:40.972412   32635 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:27:40.972656   32635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:27:40.972668   32635 out.go:304] Setting ErrFile to fd 2...
	I0612 20:27:40.972675   32635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:27:40.973281   32635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:27:40.974350   32635 out.go:298] Setting JSON to false
	I0612 20:27:40.975165   32635 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4206,"bootTime":1718219855,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 20:27:40.975250   32635 start.go:139] virtualization: kvm guest
	I0612 20:27:40.977294   32635 out.go:177] * [ha-844626] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 20:27:40.979019   32635 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 20:27:40.980460   32635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 20:27:40.979033   32635 notify.go:220] Checking for updates...
	I0612 20:27:40.982970   32635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:27:40.984198   32635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:27:40.985582   32635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 20:27:40.987005   32635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 20:27:40.988431   32635 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 20:27:41.022803   32635 out.go:177] * Using the kvm2 driver based on user configuration
	I0612 20:27:41.024103   32635 start.go:297] selected driver: kvm2
	I0612 20:27:41.024119   32635 start.go:901] validating driver "kvm2" against <nil>
	I0612 20:27:41.024129   32635 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 20:27:41.024807   32635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:27:41.024879   32635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 20:27:41.039138   32635 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0612 20:27:41.039192   32635 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0612 20:27:41.039394   32635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 20:27:41.039449   32635 cni.go:84] Creating CNI manager for ""
	I0612 20:27:41.039460   32635 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0612 20:27:41.039467   32635 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0612 20:27:41.039521   32635 start.go:340] cluster config:
	{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0612 20:27:41.039608   32635 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:27:41.041371   32635 out.go:177] * Starting "ha-844626" primary control-plane node in "ha-844626" cluster
	I0612 20:27:41.042634   32635 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:27:41.042666   32635 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0612 20:27:41.042675   32635 cache.go:56] Caching tarball of preloaded images
	I0612 20:27:41.042737   32635 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 20:27:41.042747   32635 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 20:27:41.043053   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:27:41.043073   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json: {Name:mked60f99278039b9c24d295779696b34306771a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:27:41.043256   32635 start.go:360] acquireMachinesLock for ha-844626: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 20:27:41.043302   32635 start.go:364] duration metric: took 22.479µs to acquireMachinesLock for "ha-844626"
	I0612 20:27:41.043320   32635 start.go:93] Provisioning new machine with config: &{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:27:41.043378   32635 start.go:125] createHost starting for "" (driver="kvm2")
	I0612 20:27:41.045005   32635 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 20:27:41.045132   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:27:41.045181   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:27:41.059056   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0612 20:27:41.059495   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:27:41.059994   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:27:41.060014   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:27:41.060344   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:27:41.060538   32635 main.go:141] libmachine: (ha-844626) Calling .GetMachineName
	I0612 20:27:41.060668   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:27:41.060852   32635 start.go:159] libmachine.API.Create for "ha-844626" (driver="kvm2")
	I0612 20:27:41.060882   32635 client.go:168] LocalClient.Create starting
	I0612 20:27:41.060923   32635 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem
	I0612 20:27:41.060965   32635 main.go:141] libmachine: Decoding PEM data...
	I0612 20:27:41.060988   32635 main.go:141] libmachine: Parsing certificate...
	I0612 20:27:41.061068   32635 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem
	I0612 20:27:41.061101   32635 main.go:141] libmachine: Decoding PEM data...
	I0612 20:27:41.061122   32635 main.go:141] libmachine: Parsing certificate...
	I0612 20:27:41.061147   32635 main.go:141] libmachine: Running pre-create checks...
	I0612 20:27:41.061161   32635 main.go:141] libmachine: (ha-844626) Calling .PreCreateCheck
	I0612 20:27:41.061488   32635 main.go:141] libmachine: (ha-844626) Calling .GetConfigRaw
	I0612 20:27:41.061817   32635 main.go:141] libmachine: Creating machine...
	I0612 20:27:41.061831   32635 main.go:141] libmachine: (ha-844626) Calling .Create
	I0612 20:27:41.061947   32635 main.go:141] libmachine: (ha-844626) Creating KVM machine...
	I0612 20:27:41.063282   32635 main.go:141] libmachine: (ha-844626) DBG | found existing default KVM network
	I0612 20:27:41.063927   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:41.063790   32658 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0612 20:27:41.063954   32635 main.go:141] libmachine: (ha-844626) DBG | created network xml: 
	I0612 20:27:41.063966   32635 main.go:141] libmachine: (ha-844626) DBG | <network>
	I0612 20:27:41.063974   32635 main.go:141] libmachine: (ha-844626) DBG |   <name>mk-ha-844626</name>
	I0612 20:27:41.063980   32635 main.go:141] libmachine: (ha-844626) DBG |   <dns enable='no'/>
	I0612 20:27:41.063984   32635 main.go:141] libmachine: (ha-844626) DBG |   
	I0612 20:27:41.063991   32635 main.go:141] libmachine: (ha-844626) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0612 20:27:41.063997   32635 main.go:141] libmachine: (ha-844626) DBG |     <dhcp>
	I0612 20:27:41.064003   32635 main.go:141] libmachine: (ha-844626) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0612 20:27:41.064013   32635 main.go:141] libmachine: (ha-844626) DBG |     </dhcp>
	I0612 20:27:41.064025   32635 main.go:141] libmachine: (ha-844626) DBG |   </ip>
	I0612 20:27:41.064038   32635 main.go:141] libmachine: (ha-844626) DBG |   
	I0612 20:27:41.064055   32635 main.go:141] libmachine: (ha-844626) DBG | </network>
	I0612 20:27:41.064063   32635 main.go:141] libmachine: (ha-844626) DBG | 
	I0612 20:27:41.069240   32635 main.go:141] libmachine: (ha-844626) DBG | trying to create private KVM network mk-ha-844626 192.168.39.0/24...
	I0612 20:27:41.133290   32635 main.go:141] libmachine: (ha-844626) DBG | private KVM network mk-ha-844626 192.168.39.0/24 created
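The XML printed above is what gets handed to libvirt to create the private mk-ha-844626 network. A rough sketch of defining and starting an equivalent network with the libvirt.org/go/libvirt bindings; the API usage here is an illustrative assumption, not the kvm2 driver's actual code:

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    const networkXML = `<network>
      <name>mk-ha-844626</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define a persistent network from the XML shown in the log, then start it.
        network, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            log.Fatal(err)
        }
        defer network.Free()
        if err := network.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("private KVM network mk-ha-844626 created")
    }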
	I0612 20:27:41.133329   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:41.133261   32658 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:27:41.133343   32635 main.go:141] libmachine: (ha-844626) Setting up store path in /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626 ...
	I0612 20:27:41.133365   32635 main.go:141] libmachine: (ha-844626) Building disk image from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0612 20:27:41.133409   32635 main.go:141] libmachine: (ha-844626) Downloading /home/jenkins/minikube-integration/17779-14199/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0612 20:27:41.359777   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:41.359654   32658 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa...
	I0612 20:27:41.706884   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:41.706757   32658 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/ha-844626.rawdisk...
	I0612 20:27:41.706926   32635 main.go:141] libmachine: (ha-844626) DBG | Writing magic tar header
	I0612 20:27:41.706936   32635 main.go:141] libmachine: (ha-844626) DBG | Writing SSH key tar header
	I0612 20:27:41.706949   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:41.706868   32658 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626 ...
	I0612 20:27:41.707033   32635 main.go:141] libmachine: (ha-844626) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626 (perms=drwx------)
	I0612 20:27:41.707051   32635 main.go:141] libmachine: (ha-844626) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626
	I0612 20:27:41.707063   32635 main.go:141] libmachine: (ha-844626) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines (perms=drwxr-xr-x)
	I0612 20:27:41.707074   32635 main.go:141] libmachine: (ha-844626) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines
	I0612 20:27:41.707085   32635 main.go:141] libmachine: (ha-844626) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:27:41.707092   32635 main.go:141] libmachine: (ha-844626) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199
	I0612 20:27:41.707107   32635 main.go:141] libmachine: (ha-844626) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0612 20:27:41.707116   32635 main.go:141] libmachine: (ha-844626) DBG | Checking permissions on dir: /home/jenkins
	I0612 20:27:41.707125   32635 main.go:141] libmachine: (ha-844626) DBG | Checking permissions on dir: /home
	I0612 20:27:41.707146   32635 main.go:141] libmachine: (ha-844626) DBG | Skipping /home - not owner
	I0612 20:27:41.707197   32635 main.go:141] libmachine: (ha-844626) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube (perms=drwxr-xr-x)
	I0612 20:27:41.707234   32635 main.go:141] libmachine: (ha-844626) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199 (perms=drwxrwxr-x)
	I0612 20:27:41.707248   32635 main.go:141] libmachine: (ha-844626) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0612 20:27:41.707267   32635 main.go:141] libmachine: (ha-844626) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0612 20:27:41.707281   32635 main.go:141] libmachine: (ha-844626) Creating domain...
	I0612 20:27:41.708187   32635 main.go:141] libmachine: (ha-844626) define libvirt domain using xml: 
	I0612 20:27:41.708209   32635 main.go:141] libmachine: (ha-844626) <domain type='kvm'>
	I0612 20:27:41.708219   32635 main.go:141] libmachine: (ha-844626)   <name>ha-844626</name>
	I0612 20:27:41.708230   32635 main.go:141] libmachine: (ha-844626)   <memory unit='MiB'>2200</memory>
	I0612 20:27:41.708241   32635 main.go:141] libmachine: (ha-844626)   <vcpu>2</vcpu>
	I0612 20:27:41.708252   32635 main.go:141] libmachine: (ha-844626)   <features>
	I0612 20:27:41.708263   32635 main.go:141] libmachine: (ha-844626)     <acpi/>
	I0612 20:27:41.708273   32635 main.go:141] libmachine: (ha-844626)     <apic/>
	I0612 20:27:41.708284   32635 main.go:141] libmachine: (ha-844626)     <pae/>
	I0612 20:27:41.708308   32635 main.go:141] libmachine: (ha-844626)     
	I0612 20:27:41.708321   32635 main.go:141] libmachine: (ha-844626)   </features>
	I0612 20:27:41.708332   32635 main.go:141] libmachine: (ha-844626)   <cpu mode='host-passthrough'>
	I0612 20:27:41.708340   32635 main.go:141] libmachine: (ha-844626)   
	I0612 20:27:41.708352   32635 main.go:141] libmachine: (ha-844626)   </cpu>
	I0612 20:27:41.708363   32635 main.go:141] libmachine: (ha-844626)   <os>
	I0612 20:27:41.708373   32635 main.go:141] libmachine: (ha-844626)     <type>hvm</type>
	I0612 20:27:41.708384   32635 main.go:141] libmachine: (ha-844626)     <boot dev='cdrom'/>
	I0612 20:27:41.708396   32635 main.go:141] libmachine: (ha-844626)     <boot dev='hd'/>
	I0612 20:27:41.708412   32635 main.go:141] libmachine: (ha-844626)     <bootmenu enable='no'/>
	I0612 20:27:41.708423   32635 main.go:141] libmachine: (ha-844626)   </os>
	I0612 20:27:41.708434   32635 main.go:141] libmachine: (ha-844626)   <devices>
	I0612 20:27:41.708445   32635 main.go:141] libmachine: (ha-844626)     <disk type='file' device='cdrom'>
	I0612 20:27:41.708459   32635 main.go:141] libmachine: (ha-844626)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/boot2docker.iso'/>
	I0612 20:27:41.708475   32635 main.go:141] libmachine: (ha-844626)       <target dev='hdc' bus='scsi'/>
	I0612 20:27:41.708499   32635 main.go:141] libmachine: (ha-844626)       <readonly/>
	I0612 20:27:41.708510   32635 main.go:141] libmachine: (ha-844626)     </disk>
	I0612 20:27:41.708518   32635 main.go:141] libmachine: (ha-844626)     <disk type='file' device='disk'>
	I0612 20:27:41.708533   32635 main.go:141] libmachine: (ha-844626)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0612 20:27:41.708549   32635 main.go:141] libmachine: (ha-844626)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/ha-844626.rawdisk'/>
	I0612 20:27:41.708562   32635 main.go:141] libmachine: (ha-844626)       <target dev='hda' bus='virtio'/>
	I0612 20:27:41.708576   32635 main.go:141] libmachine: (ha-844626)     </disk>
	I0612 20:27:41.708589   32635 main.go:141] libmachine: (ha-844626)     <interface type='network'>
	I0612 20:27:41.708600   32635 main.go:141] libmachine: (ha-844626)       <source network='mk-ha-844626'/>
	I0612 20:27:41.708613   32635 main.go:141] libmachine: (ha-844626)       <model type='virtio'/>
	I0612 20:27:41.708623   32635 main.go:141] libmachine: (ha-844626)     </interface>
	I0612 20:27:41.708634   32635 main.go:141] libmachine: (ha-844626)     <interface type='network'>
	I0612 20:27:41.708647   32635 main.go:141] libmachine: (ha-844626)       <source network='default'/>
	I0612 20:27:41.708660   32635 main.go:141] libmachine: (ha-844626)       <model type='virtio'/>
	I0612 20:27:41.708671   32635 main.go:141] libmachine: (ha-844626)     </interface>
	I0612 20:27:41.708681   32635 main.go:141] libmachine: (ha-844626)     <serial type='pty'>
	I0612 20:27:41.708691   32635 main.go:141] libmachine: (ha-844626)       <target port='0'/>
	I0612 20:27:41.708703   32635 main.go:141] libmachine: (ha-844626)     </serial>
	I0612 20:27:41.708719   32635 main.go:141] libmachine: (ha-844626)     <console type='pty'>
	I0612 20:27:41.708730   32635 main.go:141] libmachine: (ha-844626)       <target type='serial' port='0'/>
	I0612 20:27:41.708743   32635 main.go:141] libmachine: (ha-844626)     </console>
	I0612 20:27:41.708755   32635 main.go:141] libmachine: (ha-844626)     <rng model='virtio'>
	I0612 20:27:41.708767   32635 main.go:141] libmachine: (ha-844626)       <backend model='random'>/dev/random</backend>
	I0612 20:27:41.708779   32635 main.go:141] libmachine: (ha-844626)     </rng>
	I0612 20:27:41.708794   32635 main.go:141] libmachine: (ha-844626)     
	I0612 20:27:41.708805   32635 main.go:141] libmachine: (ha-844626)     
	I0612 20:27:41.708814   32635 main.go:141] libmachine: (ha-844626)   </devices>
	I0612 20:27:41.708823   32635 main.go:141] libmachine: (ha-844626) </domain>
	I0612 20:27:41.708833   32635 main.go:141] libmachine: (ha-844626) 
	I0612 20:27:41.712846   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:5b:21:1b in network default
	I0612 20:27:41.713412   32635 main.go:141] libmachine: (ha-844626) Ensuring networks are active...
	I0612 20:27:41.713434   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:41.714106   32635 main.go:141] libmachine: (ha-844626) Ensuring network default is active
	I0612 20:27:41.714440   32635 main.go:141] libmachine: (ha-844626) Ensuring network mk-ha-844626 is active
	I0612 20:27:41.715208   32635 main.go:141] libmachine: (ha-844626) Getting domain xml...
	I0612 20:27:41.716030   32635 main.go:141] libmachine: (ha-844626) Creating domain...
	I0612 20:27:42.877106   32635 main.go:141] libmachine: (ha-844626) Waiting to get IP...
	I0612 20:27:42.877937   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:42.878329   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:42.878352   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:42.878306   32658 retry.go:31] will retry after 251.928711ms: waiting for machine to come up
	I0612 20:27:43.132009   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:43.132528   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:43.132550   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:43.132471   32658 retry.go:31] will retry after 324.411916ms: waiting for machine to come up
	I0612 20:27:43.458826   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:43.459192   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:43.459216   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:43.459164   32658 retry.go:31] will retry after 316.141039ms: waiting for machine to come up
	I0612 20:27:43.776450   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:43.776803   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:43.776829   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:43.776773   32658 retry.go:31] will retry after 586.686885ms: waiting for machine to come up
	I0612 20:27:44.365246   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:44.365624   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:44.365655   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:44.365582   32658 retry.go:31] will retry after 589.180902ms: waiting for machine to come up
	I0612 20:27:44.956283   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:44.956690   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:44.956724   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:44.956654   32658 retry.go:31] will retry after 585.086589ms: waiting for machine to come up
	I0612 20:27:45.543269   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:45.543749   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:45.543786   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:45.543679   32658 retry.go:31] will retry after 723.01632ms: waiting for machine to come up
	I0612 20:27:46.268214   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:46.268654   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:46.268679   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:46.268627   32658 retry.go:31] will retry after 1.107858591s: waiting for machine to come up
	I0612 20:27:47.377938   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:47.378439   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:47.378464   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:47.378403   32658 retry.go:31] will retry after 1.845151914s: waiting for machine to come up
	I0612 20:27:49.224676   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:49.225081   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:49.225103   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:49.225017   32658 retry.go:31] will retry after 2.326337363s: waiting for machine to come up
	I0612 20:27:51.553288   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:51.553759   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:51.553788   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:51.553714   32658 retry.go:31] will retry after 2.857778141s: waiting for machine to come up
	I0612 20:27:54.414736   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:54.415212   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:54.415240   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:54.415137   32658 retry.go:31] will retry after 3.378845367s: waiting for machine to come up
	I0612 20:27:57.796199   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:57.796596   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:57.796614   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:57.796552   32658 retry.go:31] will retry after 3.490939997s: waiting for machine to come up
	I0612 20:28:01.289120   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.289570   32635 main.go:141] libmachine: (ha-844626) Found IP for machine: 192.168.39.196
	I0612 20:28:01.289590   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has current primary IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
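The "will retry after ...: waiting for machine to come up" lines above show a growing, jittered delay between DHCP-lease lookups until the domain reports an IP. A simplified sketch of that retry pattern in plain Go; lookupIP is a hypothetical stand-in for the driver's lease query, not an actual minikube function:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the libvirt DHCP leases for the domain's
    // MAC address; in this sketch it always fails, so the loop simply retries.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            // Grow the delay and add jitter, roughly matching the intervals
            // visible in the log (hundreds of ms up to a few seconds).
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("machine %s did not get an IP within %v", mac, timeout)
    }

    func main() {
        if ip, err := waitForIP("52:54:00:8a:2d:9f", 3*time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("Found IP for machine:", ip)
        }
    }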
	I0612 20:28:01.289597   32635 main.go:141] libmachine: (ha-844626) Reserving static IP address...
	I0612 20:28:01.289895   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find host DHCP lease matching {name: "ha-844626", mac: "52:54:00:8a:2d:9f", ip: "192.168.39.196"} in network mk-ha-844626
	I0612 20:28:01.363725   32635 main.go:141] libmachine: (ha-844626) Reserved static IP address: 192.168.39.196
	I0612 20:28:01.363754   32635 main.go:141] libmachine: (ha-844626) Waiting for SSH to be available...
	I0612 20:28:01.363764   32635 main.go:141] libmachine: (ha-844626) DBG | Getting to WaitForSSH function...
	I0612 20:28:01.366151   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.366560   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:01.366586   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.366707   32635 main.go:141] libmachine: (ha-844626) DBG | Using SSH client type: external
	I0612 20:28:01.366738   32635 main.go:141] libmachine: (ha-844626) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa (-rw-------)
	I0612 20:28:01.366782   32635 main.go:141] libmachine: (ha-844626) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 20:28:01.366797   32635 main.go:141] libmachine: (ha-844626) DBG | About to run SSH command:
	I0612 20:28:01.366810   32635 main.go:141] libmachine: (ha-844626) DBG | exit 0
	I0612 20:28:01.487570   32635 main.go:141] libmachine: (ha-844626) DBG | SSH cmd err, output: <nil>: 
	I0612 20:28:01.488003   32635 main.go:141] libmachine: (ha-844626) KVM machine creation complete!
	I0612 20:28:01.488243   32635 main.go:141] libmachine: (ha-844626) Calling .GetConfigRaw
	I0612 20:28:01.488719   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:01.488938   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:01.489119   32635 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0612 20:28:01.489134   32635 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:28:01.490477   32635 main.go:141] libmachine: Detecting operating system of created instance...
	I0612 20:28:01.490491   32635 main.go:141] libmachine: Waiting for SSH to be available...
	I0612 20:28:01.490505   32635 main.go:141] libmachine: Getting to WaitForSSH function...
	I0612 20:28:01.490513   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:01.492740   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.493113   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:01.493143   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.493229   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:01.493420   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.493576   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.493724   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:01.493883   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:01.494178   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:28:01.494192   32635 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0612 20:28:01.594480   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
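The provisioner treats a successful "exit 0" over SSH, as logged just above, as proof that the new VM is reachable. An equivalent probe using golang.org/x/crypto/ssh, assuming key-based auth as the docker user and ignoring host keys to mirror the StrictHostKeyChecking=no option seen earlier in the log:

    package main

    import (
        "fmt"
        "log"
        "net"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", net.JoinHostPort("192.168.39.196", "22"), cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        if err := session.Run("exit 0"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("SSH is available")
    }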
	I0612 20:28:01.594515   32635 main.go:141] libmachine: Detecting the provisioner...
	I0612 20:28:01.594528   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:01.597525   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.597995   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:01.598018   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.598329   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:01.598531   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.598672   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.598810   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:01.598980   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:01.599236   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:28:01.599251   32635 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0612 20:28:01.699939   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0612 20:28:01.699997   32635 main.go:141] libmachine: found compatible host: buildroot
	I0612 20:28:01.700003   32635 main.go:141] libmachine: Provisioning with buildroot...
	I0612 20:28:01.700010   32635 main.go:141] libmachine: (ha-844626) Calling .GetMachineName
	I0612 20:28:01.700296   32635 buildroot.go:166] provisioning hostname "ha-844626"
	I0612 20:28:01.700319   32635 main.go:141] libmachine: (ha-844626) Calling .GetMachineName
	I0612 20:28:01.700529   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:01.703527   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.703955   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:01.703976   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.704077   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:01.704253   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.704415   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.704590   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:01.704785   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:01.704994   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:28:01.705008   32635 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844626 && echo "ha-844626" | sudo tee /etc/hostname
	I0612 20:28:01.822281   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844626
	
	I0612 20:28:01.822307   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:01.824810   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.825195   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:01.825228   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.825425   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:01.825594   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.825737   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.825833   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:01.825956   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:01.826125   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:28:01.826140   32635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844626' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844626/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844626' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 20:28:01.940075   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 20:28:01.940110   32635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 20:28:01.940139   32635 buildroot.go:174] setting up certificates
	I0612 20:28:01.940149   32635 provision.go:84] configureAuth start
	I0612 20:28:01.940158   32635 main.go:141] libmachine: (ha-844626) Calling .GetMachineName
	I0612 20:28:01.940481   32635 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:28:01.942968   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.943378   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:01.943405   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.943664   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:01.945708   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.946013   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:01.946031   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.946158   32635 provision.go:143] copyHostCerts
	I0612 20:28:01.946190   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:28:01.946237   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 20:28:01.946248   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:28:01.946320   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 20:28:01.946411   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:28:01.946445   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 20:28:01.946455   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:28:01.946493   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 20:28:01.946550   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:28:01.946573   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 20:28:01.946582   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:28:01.946614   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 20:28:01.946703   32635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.ha-844626 san=[127.0.0.1 192.168.39.196 ha-844626 localhost minikube]
	I0612 20:28:02.042742   32635 provision.go:177] copyRemoteCerts
	I0612 20:28:02.042800   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 20:28:02.042836   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:02.045415   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.045688   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.045731   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.045876   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:02.046057   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.046259   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:02.046382   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:28:02.126575   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0612 20:28:02.126659   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 20:28:02.152327   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0612 20:28:02.152398   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0612 20:28:02.176724   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0612 20:28:02.176783   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 20:28:02.200633   32635 provision.go:87] duration metric: took 260.470177ms to configureAuth
	I0612 20:28:02.200661   32635 buildroot.go:189] setting minikube options for container-runtime
	I0612 20:28:02.200875   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:28:02.200961   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:02.203680   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.204089   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.204118   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.204320   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:02.204515   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.204662   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.204801   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:02.205002   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:02.205171   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:28:02.205189   32635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 20:28:02.479832   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
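Note: the command above drops CRIO_MINIKUBE_OPTIONS (here --insecure-registry 10.96.0.0/12, i.e. the service CIDR) into /etc/sysconfig/crio.minikube and restarts CRI-O so in-cluster registries can be used without TLS. A quick sanity check on the guest might look like the sketch below; it assumes the crio unit sources that environment file, which is how the minikube guest image is normally wired up:

  $ cat /etc/sysconfig/crio.minikube
  CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
  $ systemctl cat crio | grep -i -E 'EnvironmentFile|CRIO_MINIKUBE_OPTIONS'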
	I0612 20:28:02.479862   32635 main.go:141] libmachine: Checking connection to Docker...
	I0612 20:28:02.479885   32635 main.go:141] libmachine: (ha-844626) Calling .GetURL
	I0612 20:28:02.481131   32635 main.go:141] libmachine: (ha-844626) DBG | Using libvirt version 6000000
	I0612 20:28:02.483017   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.483369   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.483396   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.483571   32635 main.go:141] libmachine: Docker is up and running!
	I0612 20:28:02.483582   32635 main.go:141] libmachine: Reticulating splines...
	I0612 20:28:02.483588   32635 client.go:171] duration metric: took 21.422699477s to LocalClient.Create
	I0612 20:28:02.483608   32635 start.go:167] duration metric: took 21.422756924s to libmachine.API.Create "ha-844626"
	I0612 20:28:02.483616   32635 start.go:293] postStartSetup for "ha-844626" (driver="kvm2")
	I0612 20:28:02.483625   32635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 20:28:02.483639   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:02.483845   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 20:28:02.483877   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:02.486014   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.486321   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.486347   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.486478   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:02.486668   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.486800   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:02.486911   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:28:02.566307   32635 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 20:28:02.570568   32635 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 20:28:02.570595   32635 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 20:28:02.570652   32635 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 20:28:02.570753   32635 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 20:28:02.570768   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /etc/ssl/certs/214442.pem
	I0612 20:28:02.570903   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 20:28:02.580443   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:28:02.604400   32635 start.go:296] duration metric: took 120.770015ms for postStartSetup
	I0612 20:28:02.604443   32635 main.go:141] libmachine: (ha-844626) Calling .GetConfigRaw
	I0612 20:28:02.605074   32635 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:28:02.607655   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.607980   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.607998   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.608305   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:28:02.608477   32635 start.go:128] duration metric: took 21.565089753s to createHost
	I0612 20:28:02.608498   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:02.610703   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.611051   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.611069   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.611320   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:02.611512   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.611685   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.611821   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:02.611963   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:02.612195   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:28:02.612209   32635 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 20:28:02.716099   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718224082.689142441
	
	I0612 20:28:02.716119   32635 fix.go:216] guest clock: 1718224082.689142441
	I0612 20:28:02.716126   32635 fix.go:229] Guest: 2024-06-12 20:28:02.689142441 +0000 UTC Remote: 2024-06-12 20:28:02.608489141 +0000 UTC m=+21.668937559 (delta=80.6533ms)
	I0612 20:28:02.716144   32635 fix.go:200] guest clock delta is within tolerance: 80.6533ms
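Note: the delta logged above is simply the difference between the two timestamps shown: 20:28:02.689142441 - 20:28:02.608489141 = 0.0806533 s, i.e. the 80.6533ms reported, which is inside the allowed drift, so the guest clock is left untouched.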
	I0612 20:28:02.716149   32635 start.go:83] releasing machines lock for "ha-844626", held for 21.672839067s
	I0612 20:28:02.716166   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:02.716425   32635 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:28:02.719033   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.719441   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.719476   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.719585   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:02.720108   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:02.720308   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:02.720404   32635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 20:28:02.720458   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:02.720483   32635 ssh_runner.go:195] Run: cat /version.json
	I0612 20:28:02.720502   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:02.723003   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.723040   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.723416   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.723444   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.723475   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.723489   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.723568   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:02.723716   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:02.723727   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.723848   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:02.723920   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.723986   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:28:02.724041   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:02.724173   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:28:02.832342   32635 ssh_runner.go:195] Run: systemctl --version
	I0612 20:28:02.839025   32635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 20:28:03.005734   32635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 20:28:03.012212   32635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 20:28:03.012286   32635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 20:28:03.029159   32635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 20:28:03.029183   32635 start.go:494] detecting cgroup driver to use...
	I0612 20:28:03.029233   32635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 20:28:03.045339   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 20:28:03.059281   32635 docker.go:217] disabling cri-docker service (if available) ...
	I0612 20:28:03.059353   32635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 20:28:03.073629   32635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 20:28:03.087326   32635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 20:28:03.207418   32635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 20:28:03.358651   32635 docker.go:233] disabling docker service ...
	I0612 20:28:03.358723   32635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 20:28:03.373844   32635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 20:28:03.387977   32635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 20:28:03.525343   32635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 20:28:03.650448   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 20:28:03.665958   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 20:28:03.685409   32635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 20:28:03.685472   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:03.696471   32635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 20:28:03.696527   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:03.707401   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:03.717547   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:03.728800   32635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 20:28:03.740092   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:03.751133   32635 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:03.768505   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
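Note: the sed pipeline above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place: pause image pinned to registry.k8s.io/pause:3.9, cgroup_manager set to cgroupfs, conmon_cgroup forced to "pod", and a default_sysctls list that opens unprivileged ports (net.ipv4.ip_unprivileged_port_start=0). To eyeball the result on the guest, something like this works (expected output sketched from the commands above; surrounding lines in the drop-in may differ):

  $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  # expected, roughly:
  #   pause_image = "registry.k8s.io/pause:3.9"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #     "net.ipv4.ip_unprivileged_port_start=0",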
	I0612 20:28:03.779732   32635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 20:28:03.790037   32635 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 20:28:03.790104   32635 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 20:28:03.804622   32635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
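Note: the status-255 sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is exactly what the following modprobe does, after which bridged pod traffic becomes visible to iptables; the echo then enables IPv4 forwarding. Re-checking by hand would be:

  $ sudo modprobe br_netfilter
  $ sysctl net.bridge.bridge-nf-call-iptables    # key exists once the module is loaded
  $ cat /proc/sys/net/ipv4/ip_forward            # 1 after the echo above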
	I0612 20:28:03.815507   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:28:03.932284   32635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 20:28:04.074714   32635 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 20:28:04.074788   32635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 20:28:04.079956   32635 start.go:562] Will wait 60s for crictl version
	I0612 20:28:04.080013   32635 ssh_runner.go:195] Run: which crictl
	I0612 20:28:04.083863   32635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 20:28:04.123823   32635 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 20:28:04.123927   32635 ssh_runner.go:195] Run: crio --version
	I0612 20:28:04.152702   32635 ssh_runner.go:195] Run: crio --version
	I0612 20:28:04.183406   32635 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 20:28:04.184860   32635 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:28:04.187810   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:04.188255   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:04.188290   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:04.188431   32635 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 20:28:04.192780   32635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 20:28:04.205768   32635 kubeadm.go:877] updating cluster {Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 20:28:04.205874   32635 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:28:04.205915   32635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 20:28:04.239424   32635 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 20:28:04.239487   32635 ssh_runner.go:195] Run: which lz4
	I0612 20:28:04.243748   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0612 20:28:04.243871   32635 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 20:28:04.248527   32635 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 20:28:04.248562   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 20:28:05.683994   32635 crio.go:462] duration metric: took 1.440168489s to copy over tarball
	I0612 20:28:05.684069   32635 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 20:28:07.793927   32635 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.109826987s)
	I0612 20:28:07.793959   32635 crio.go:469] duration metric: took 2.109938484s to extract the tarball
	I0612 20:28:07.793966   32635 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 20:28:07.833160   32635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 20:28:07.876721   32635 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 20:28:07.876749   32635 cache_images.go:84] Images are preloaded, skipping loading
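Note: this is the image-preload path: the first crictl listing finds nothing, so the ~395 MB preloaded-images tarball is copied to /preloaded.tar.lz4 and untarred into /var (where CRI-O's container storage lives), after which the second listing reports all control-plane images and no registry pulls are needed. A manual spot check (the storage path is the containers-storage default and may differ):

  $ sudo crictl images | grep kube-apiserver
  $ sudo du -sh /var/lib/containers/storage      # grows by roughly the unpacked preload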
	I0612 20:28:07.876758   32635 kubeadm.go:928] updating node { 192.168.39.196 8443 v1.30.1 crio true true} ...
	I0612 20:28:07.876885   32635 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844626 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
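Note: the fragment above becomes the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (written by the 309-byte scp later in the log); the empty ExecStart= line clears the base unit's command so the following line can restate it with the node-specific flags (--node-ip, --hostname-override, the bootstrap kubeconfig). Viewing the merged unit on the guest:

  $ systemctl cat kubelet                        # base unit plus the 10-kubeadm.conf drop-in
  $ systemctl show kubelet -p ExecStart --no-pager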
	I0612 20:28:07.876969   32635 ssh_runner.go:195] Run: crio config
	I0612 20:28:07.926529   32635 cni.go:84] Creating CNI manager for ""
	I0612 20:28:07.926553   32635 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0612 20:28:07.926562   32635 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 20:28:07.926587   32635 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-844626 NodeName:ha-844626 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 20:28:07.926722   32635 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-844626"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
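Note: this rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new (the 2153-byte scp below) and copied to kubeadm.yaml right before the cluster is initialised. The init step that follows is, in rough outline, equivalent to the sketch below; minikube's real invocation passes a longer --ignore-preflight-errors list and additional flags:

  $ sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests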
	I0612 20:28:07.926746   32635 kube-vip.go:115] generating kube-vip config ...
	I0612 20:28:07.926784   32635 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0612 20:28:07.943966   32635 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0612 20:28:07.944088   32635 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
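Note: the manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (the 1447-byte scp just below), so kubelet runs kube-vip as a static pod on every control-plane node. With cp_enable, lb_enable and vip_leaderelection set, the elected leader announces the HA virtual IP 192.168.39.254 on eth0 via ARP and balances API traffic on port 8443; that address is the APIServerHAVIP later mapped to control-plane.minikube.internal in /etc/hosts. Once the node is up, a rough check would be:

  $ ip addr show eth0 | grep 192.168.39.254      # present only on the current kube-vip leader
  $ sudo crictl ps --name kube-vip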
	I0612 20:28:07.944165   32635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 20:28:07.954861   32635 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 20:28:07.954939   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0612 20:28:07.964651   32635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0612 20:28:07.981438   32635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 20:28:07.997818   32635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0612 20:28:08.014061   32635 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0612 20:28:08.030286   32635 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0612 20:28:08.034165   32635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 20:28:08.046294   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:28:08.166144   32635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 20:28:08.184592   32635 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626 for IP: 192.168.39.196
	I0612 20:28:08.184616   32635 certs.go:194] generating shared ca certs ...
	I0612 20:28:08.184636   32635 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:08.184825   32635 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 20:28:08.184876   32635 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 20:28:08.184890   32635 certs.go:256] generating profile certs ...
	I0612 20:28:08.184953   32635 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key
	I0612 20:28:08.184971   32635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.crt with IP's: []
	I0612 20:28:08.252302   32635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.crt ...
	I0612 20:28:08.252333   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.crt: {Name:mkd4f9765dc2fdba49dd784d22bb60440d0a8c32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:08.252486   32635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key ...
	I0612 20:28:08.252497   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key: {Name:mk886b18d2e24f1c9aa1cd0d466e4744a6eefbc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:08.252569   32635 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.23a7e20a
	I0612 20:28:08.252583   32635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.23a7e20a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.254]
	I0612 20:28:08.355250   32635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.23a7e20a ...
	I0612 20:28:08.355273   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.23a7e20a: {Name:mkfdab9b803a4796bf933c99aedbe3d7f2c9d42d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:08.355419   32635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.23a7e20a ...
	I0612 20:28:08.355432   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.23a7e20a: {Name:mk231338a689f18482141f43a8c21a67e5049b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:08.355500   32635 certs.go:381] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.23a7e20a -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt
	I0612 20:28:08.355581   32635 certs.go:385] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.23a7e20a -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key
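Note: the apiserver serving certificate generated above deliberately carries SANs for 10.96.0.1 (the in-cluster service IP), 127.0.0.1, 10.0.0.1, the node IP 192.168.39.196 and the HA VIP 192.168.39.254, so the API server can be reached through any of them without TLS name errors. The SAN list can be confirmed against the generated file:

  $ openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt \
      | grep -A1 'Subject Alternative Name'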
	I0612 20:28:08.355633   32635 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key
	I0612 20:28:08.355648   32635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt with IP's: []
	I0612 20:28:08.441754   32635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt ...
	I0612 20:28:08.441779   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt: {Name:mka449a40f128c0d8f283fbeb7606c82b8efeb35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:08.441911   32635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key ...
	I0612 20:28:08.441920   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key: {Name:mk89014c94a5f0f3d7cb3f60cd2c9fd7d27fbf9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:08.441983   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 20:28:08.441999   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0612 20:28:08.442009   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 20:28:08.442019   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 20:28:08.442031   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 20:28:08.442041   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 20:28:08.442051   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 20:28:08.442060   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 20:28:08.442103   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 20:28:08.442135   32635 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 20:28:08.442145   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 20:28:08.442165   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 20:28:08.442185   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 20:28:08.442206   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 20:28:08.442240   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:28:08.442276   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:28:08.442303   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem -> /usr/share/ca-certificates/21444.pem
	I0612 20:28:08.442316   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /usr/share/ca-certificates/214442.pem
	I0612 20:28:08.442785   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 20:28:08.468953   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 20:28:08.492177   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 20:28:08.515233   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 20:28:08.538548   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 20:28:08.561750   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 20:28:08.585113   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 20:28:08.609569   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 20:28:08.634385   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 20:28:08.658368   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 20:28:08.681497   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 20:28:08.712593   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 20:28:08.729762   32635 ssh_runner.go:195] Run: openssl version
	I0612 20:28:08.735819   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 20:28:08.746382   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:28:08.751012   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:28:08.751061   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:28:08.756796   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 20:28:08.767201   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 20:28:08.778150   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 20:28:08.782749   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 20:28:08.782796   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 20:28:08.788403   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 20:28:08.798461   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 20:28:08.809290   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 20:28:08.814009   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 20:28:08.814078   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 20:28:08.819834   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
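Note: each test/ln pair above installs a CA the way OpenSSL's trust lookup expects: the PEM is placed under /usr/share/ca-certificates and a /etc/ssl/certs/<subject-hash>.0 symlink is created, so b5213941.0, 51391683.0 and 3ec20f2e.0 are simply the subject hashes of minikubeCA.pem, 21444.pem and 214442.pem. Reproducing one of them by hand:

  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  b5213941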
	I0612 20:28:08.831221   32635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 20:28:08.835779   32635 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 20:28:08.835840   32635 kubeadm.go:391] StartCluster: {Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:28:08.835929   32635 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 20:28:08.835978   32635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 20:28:08.874542   32635 cri.go:89] found id: ""
	I0612 20:28:08.874608   32635 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0612 20:28:08.884683   32635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 20:28:08.894058   32635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 20:28:08.903306   32635 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 20:28:08.903321   32635 kubeadm.go:156] found existing configuration files:
	
	I0612 20:28:08.903354   32635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 20:28:08.912378   32635 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 20:28:08.912422   32635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 20:28:08.921517   32635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 20:28:08.930469   32635 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 20:28:08.930522   32635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 20:28:08.939976   32635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 20:28:08.948907   32635 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 20:28:08.948953   32635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 20:28:08.961019   32635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 20:28:08.970075   32635 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 20:28:08.970133   32635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
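
The four grep/rm pairs above are a staleness check: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; otherwise (or, as here, when the file is missing) it is removed with rm -f so the upcoming kubeadm init can write fresh ones. A rough Go equivalent (an assumed helper, not the minikube implementation):

    // Sketch: keep a kubeconfig only if it points at the expected control-plane endpoint.
    package main

    import (
        "bytes"
        "os"
    )

    func pruneStaleKubeconfigs(endpoint string, paths []string) error {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil {
                continue // missing file: nothing to clean up (grep exits 2 in the log)
            }
            if !bytes.Contains(data, []byte(endpoint)) {
                if err := os.Remove(p); err != nil {
                    return err
                }
            }
        }
        return nil
    }

    func main() {
        _ = pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
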
	I0612 20:28:08.980621   32635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 20:28:09.229616   32635 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 20:28:20.211010   32635 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 20:28:20.211085   32635 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 20:28:20.211184   32635 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 20:28:20.211342   32635 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 20:28:20.211478   32635 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0612 20:28:20.211560   32635 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 20:28:20.213439   32635 out.go:204]   - Generating certificates and keys ...
	I0612 20:28:20.213516   32635 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 20:28:20.213584   32635 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 20:28:20.213668   32635 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0612 20:28:20.213742   32635 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0612 20:28:20.213793   32635 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0612 20:28:20.213836   32635 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0612 20:28:20.213915   32635 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0612 20:28:20.214081   32635 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-844626 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	I0612 20:28:20.214154   32635 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0612 20:28:20.214311   32635 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-844626 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	I0612 20:28:20.214374   32635 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0612 20:28:20.214428   32635 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0612 20:28:20.214466   32635 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0612 20:28:20.214522   32635 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 20:28:20.214565   32635 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 20:28:20.214638   32635 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 20:28:20.214733   32635 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 20:28:20.214838   32635 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 20:28:20.214958   32635 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 20:28:20.215107   32635 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 20:28:20.215223   32635 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 20:28:20.216927   32635 out.go:204]   - Booting up control plane ...
	I0612 20:28:20.217047   32635 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 20:28:20.217163   32635 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 20:28:20.217226   32635 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 20:28:20.217322   32635 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 20:28:20.217403   32635 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 20:28:20.217462   32635 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 20:28:20.217645   32635 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 20:28:20.217739   32635 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 20:28:20.217818   32635 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.030994ms
	I0612 20:28:20.217923   32635 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 20:28:20.217978   32635 kubeadm.go:309] [api-check] The API server is healthy after 6.055837616s
	I0612 20:28:20.218073   32635 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 20:28:20.218200   32635 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 20:28:20.218272   32635 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 20:28:20.218434   32635 kubeadm.go:309] [mark-control-plane] Marking the node ha-844626 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 20:28:20.218519   32635 kubeadm.go:309] [bootstrap-token] Using token: rq2m6h.oorxndmx2szfgjlt
	I0612 20:28:20.219971   32635 out.go:204]   - Configuring RBAC rules ...
	I0612 20:28:20.220078   32635 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 20:28:20.220163   32635 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 20:28:20.220336   32635 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 20:28:20.220457   32635 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 20:28:20.220559   32635 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 20:28:20.220635   32635 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 20:28:20.220728   32635 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 20:28:20.220771   32635 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 20:28:20.220810   32635 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 20:28:20.220816   32635 kubeadm.go:309] 
	I0612 20:28:20.220904   32635 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 20:28:20.220928   32635 kubeadm.go:309] 
	I0612 20:28:20.221027   32635 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 20:28:20.221043   32635 kubeadm.go:309] 
	I0612 20:28:20.221091   32635 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 20:28:20.221168   32635 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 20:28:20.221244   32635 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 20:28:20.221257   32635 kubeadm.go:309] 
	I0612 20:28:20.221345   32635 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 20:28:20.221352   32635 kubeadm.go:309] 
	I0612 20:28:20.221408   32635 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 20:28:20.221417   32635 kubeadm.go:309] 
	I0612 20:28:20.221481   32635 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 20:28:20.221585   32635 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 20:28:20.221682   32635 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 20:28:20.221692   32635 kubeadm.go:309] 
	I0612 20:28:20.221804   32635 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 20:28:20.221888   32635 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 20:28:20.221908   32635 kubeadm.go:309] 
	I0612 20:28:20.221980   32635 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token rq2m6h.oorxndmx2szfgjlt \
	I0612 20:28:20.222077   32635 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 20:28:20.222099   32635 kubeadm.go:309] 	--control-plane 
	I0612 20:28:20.222103   32635 kubeadm.go:309] 
	I0612 20:28:20.222174   32635 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 20:28:20.222181   32635 kubeadm.go:309] 
	I0612 20:28:20.222257   32635 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token rq2m6h.oorxndmx2szfgjlt \
	I0612 20:28:20.222358   32635 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
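
The --discovery-token-ca-cert-hash printed with the join commands above is, per the kubeadm documentation, the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch of how such a value can be recomputed from a CA file (the exact ca.crt path is an assumption here, based on the certificateDir shown earlier):

    // Sketch: reproduce a "sha256:<hex>" discovery hash from the cluster CA certificate.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in CA file")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
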
	I0612 20:28:20.222370   32635 cni.go:84] Creating CNI manager for ""
	I0612 20:28:20.222376   32635 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0612 20:28:20.224675   32635 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0612 20:28:20.226046   32635 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0612 20:28:20.231610   32635 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0612 20:28:20.231627   32635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0612 20:28:20.252220   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0612 20:28:20.618054   32635 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 20:28:20.618128   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:20.618143   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844626 minikube.k8s.io/updated_at=2024_06_12T20_28_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=ha-844626 minikube.k8s.io/primary=true
	I0612 20:28:20.629957   32635 ops.go:34] apiserver oom_adj: -16
	I0612 20:28:20.720524   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:21.221483   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:21.720581   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:22.220693   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:22.721548   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:23.220607   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:23.720812   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:24.221346   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:24.720812   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:25.220680   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:25.721545   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:26.221495   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:26.721299   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:27.221251   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:27.721536   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:28.221381   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:28.720581   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:29.220635   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:29.721034   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:30.221414   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:30.721542   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:31.221065   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:31.720883   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:32.221359   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:32.720864   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:33.220798   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:33.330554   32635 kubeadm.go:1107] duration metric: took 12.712482777s to wait for elevateKubeSystemPrivileges
	W0612 20:28:33.330595   32635 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 20:28:33.330603   32635 kubeadm.go:393] duration metric: took 24.494765813s to StartCluster
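
The repeated `kubectl get sa default` calls above act as a readiness gate: the loop keeps polling until the `default` ServiceAccount exists (it is created by kube-controller-manager once the control plane is functional) before the cluster-admin RBAC step is considered done. A client-go sketch of the same wait (standard client-go packages; the 500ms interval mirrors the spacing of the log timestamps but is otherwise an assumption):

    // Sketch: wait until the "default" ServiceAccount appears in the "default" namespace.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        for {
            // The default ServiceAccount shows up once kube-controller-manager is running.
            if _, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
                fmt.Println("default ServiceAccount is present")
                return
            }
            select {
            case <-ctx.Done():
                panic("timed out waiting for default ServiceAccount")
            case <-time.After(500 * time.Millisecond):
            }
        }
    }
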
	I0612 20:28:33.330619   32635 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:33.330684   32635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:28:33.331674   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:33.331871   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0612 20:28:33.331883   32635 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:28:33.331908   32635 start.go:240] waiting for startup goroutines ...
	I0612 20:28:33.331924   32635 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 20:28:33.331983   32635 addons.go:69] Setting storage-provisioner=true in profile "ha-844626"
	I0612 20:28:33.332006   32635 addons.go:69] Setting default-storageclass=true in profile "ha-844626"
	I0612 20:28:33.332014   32635 addons.go:234] Setting addon storage-provisioner=true in "ha-844626"
	I0612 20:28:33.332031   32635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-844626"
	I0612 20:28:33.332042   32635 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:28:33.332086   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:28:33.332492   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:28:33.332492   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:28:33.332521   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:28:33.332546   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:28:33.347640   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46079
	I0612 20:28:33.347640   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45557
	I0612 20:28:33.348060   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:28:33.348074   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:28:33.348521   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:28:33.348537   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:28:33.348640   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:28:33.348662   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:28:33.348870   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:28:33.348931   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:28:33.349042   32635 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:28:33.349473   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:28:33.349499   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:28:33.351192   32635 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:28:33.351535   32635 kapi.go:59] client config for ha-844626: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.crt", KeyFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key", CAFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfb000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 20:28:33.352103   32635 cert_rotation.go:137] Starting client certificate rotation controller
	I0612 20:28:33.352283   32635 addons.go:234] Setting addon default-storageclass=true in "ha-844626"
	I0612 20:28:33.352326   32635 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:28:33.352678   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:28:33.352725   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:28:33.364666   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0612 20:28:33.365160   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:28:33.365649   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:28:33.365676   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:28:33.367379   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0612 20:28:33.367381   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:28:33.367675   32635 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:28:33.367862   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:28:33.368302   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:28:33.368317   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:28:33.368651   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:28:33.369084   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:28:33.369115   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:28:33.369387   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:33.371474   32635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 20:28:33.372996   32635 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 20:28:33.373014   32635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 20:28:33.373029   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:33.376276   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:33.376742   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:33.376767   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:33.376903   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:33.377075   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:33.377211   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:33.377338   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:28:33.383810   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I0612 20:28:33.384112   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:28:33.384546   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:28:33.384571   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:28:33.384847   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:28:33.385008   32635 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:28:33.386361   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:33.386579   32635 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 20:28:33.386593   32635 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 20:28:33.386605   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:33.389543   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:33.389965   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:33.390000   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:33.390136   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:33.390252   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:33.390353   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:33.390447   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:28:33.480944   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0612 20:28:33.589149   32635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 20:28:33.591711   32635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 20:28:33.952850   32635 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
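
The long sed pipeline a few lines above rewrites the CoreDNS Corefile: it inserts a hosts block mapping host.minikube.internal to the host gateway (192.168.39.1) ahead of the `forward . /etc/resolv.conf` directive (and a `log` directive ahead of `errors`), then replaces the ConfigMap. A standalone Go sketch of just the text transformation, using an illustrative sample Corefile:

    // Sketch: insert a hosts{} block before the forward directive of a Corefile.
    package main

    import (
        "fmt"
        "strings"
    )

    func injectHostRecord(corefile, hostIP string) string {
        hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out.WriteString(hostsBlock) // placed just before the forward directive
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        sample := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
        fmt.Print(injectHostRecord(sample, "192.168.39.1"))
    }
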
	I0612 20:28:33.952917   32635 main.go:141] libmachine: Making call to close driver server
	I0612 20:28:33.952937   32635 main.go:141] libmachine: (ha-844626) Calling .Close
	I0612 20:28:33.953228   32635 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:28:33.953244   32635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:28:33.953248   32635 main.go:141] libmachine: (ha-844626) DBG | Closing plugin on server side
	I0612 20:28:33.953255   32635 main.go:141] libmachine: Making call to close driver server
	I0612 20:28:33.953266   32635 main.go:141] libmachine: (ha-844626) Calling .Close
	I0612 20:28:33.953539   32635 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:28:33.953543   32635 main.go:141] libmachine: (ha-844626) DBG | Closing plugin on server side
	I0612 20:28:33.953554   32635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:28:33.953679   32635 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0612 20:28:33.953691   32635 round_trippers.go:469] Request Headers:
	I0612 20:28:33.953702   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:28:33.953710   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:28:33.965113   32635 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0612 20:28:33.965633   32635 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0612 20:28:33.965647   32635 round_trippers.go:469] Request Headers:
	I0612 20:28:33.965654   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:28:33.965659   32635 round_trippers.go:473]     Content-Type: application/json
	I0612 20:28:33.965664   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:28:33.969161   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:28:33.969312   32635 main.go:141] libmachine: Making call to close driver server
	I0612 20:28:33.969329   32635 main.go:141] libmachine: (ha-844626) Calling .Close
	I0612 20:28:33.969586   32635 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:28:33.969598   32635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:28:34.294070   32635 main.go:141] libmachine: Making call to close driver server
	I0612 20:28:34.294209   32635 main.go:141] libmachine: (ha-844626) Calling .Close
	I0612 20:28:34.294502   32635 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:28:34.294528   32635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:28:34.294537   32635 main.go:141] libmachine: Making call to close driver server
	I0612 20:28:34.294546   32635 main.go:141] libmachine: (ha-844626) Calling .Close
	I0612 20:28:34.294550   32635 main.go:141] libmachine: (ha-844626) DBG | Closing plugin on server side
	I0612 20:28:34.294766   32635 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:28:34.294781   32635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:28:34.294806   32635 main.go:141] libmachine: (ha-844626) DBG | Closing plugin on server side
	I0612 20:28:34.296660   32635 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0612 20:28:34.298128   32635 addons.go:510] duration metric: took 966.190513ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0612 20:28:34.298168   32635 start.go:245] waiting for cluster config update ...
	I0612 20:28:34.298185   32635 start.go:254] writing updated cluster config ...
	I0612 20:28:34.300255   32635 out.go:177] 
	I0612 20:28:34.301433   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:28:34.301528   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:28:34.303062   32635 out.go:177] * Starting "ha-844626-m02" control-plane node in "ha-844626" cluster
	I0612 20:28:34.304493   32635 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:28:34.304525   32635 cache.go:56] Caching tarball of preloaded images
	I0612 20:28:34.304617   32635 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 20:28:34.304633   32635 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 20:28:34.304728   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:28:34.304946   32635 start.go:360] acquireMachinesLock for ha-844626-m02: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 20:28:34.305008   32635 start.go:364] duration metric: took 37.579µs to acquireMachinesLock for "ha-844626-m02"
	I0612 20:28:34.305033   32635 start.go:93] Provisioning new machine with config: &{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:28:34.305134   32635 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0612 20:28:34.306787   32635 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 20:28:34.306883   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:28:34.306916   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:28:34.321078   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36187
	I0612 20:28:34.321498   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:28:34.321960   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:28:34.321979   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:28:34.322280   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:28:34.322483   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetMachineName
	I0612 20:28:34.322632   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:34.322781   32635 start.go:159] libmachine.API.Create for "ha-844626" (driver="kvm2")
	I0612 20:28:34.322805   32635 client.go:168] LocalClient.Create starting
	I0612 20:28:34.322838   32635 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem
	I0612 20:28:34.322876   32635 main.go:141] libmachine: Decoding PEM data...
	I0612 20:28:34.322896   32635 main.go:141] libmachine: Parsing certificate...
	I0612 20:28:34.322960   32635 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem
	I0612 20:28:34.322988   32635 main.go:141] libmachine: Decoding PEM data...
	I0612 20:28:34.323010   32635 main.go:141] libmachine: Parsing certificate...
	I0612 20:28:34.323038   32635 main.go:141] libmachine: Running pre-create checks...
	I0612 20:28:34.323051   32635 main.go:141] libmachine: (ha-844626-m02) Calling .PreCreateCheck
	I0612 20:28:34.323216   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetConfigRaw
	I0612 20:28:34.323562   32635 main.go:141] libmachine: Creating machine...
	I0612 20:28:34.323577   32635 main.go:141] libmachine: (ha-844626-m02) Calling .Create
	I0612 20:28:34.323694   32635 main.go:141] libmachine: (ha-844626-m02) Creating KVM machine...
	I0612 20:28:34.324707   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found existing default KVM network
	I0612 20:28:34.324850   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found existing private KVM network mk-ha-844626
	I0612 20:28:34.324957   32635 main.go:141] libmachine: (ha-844626-m02) Setting up store path in /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02 ...
	I0612 20:28:34.324977   32635 main.go:141] libmachine: (ha-844626-m02) Building disk image from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0612 20:28:34.325043   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:34.324957   33039 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:28:34.325237   32635 main.go:141] libmachine: (ha-844626-m02) Downloading /home/jenkins/minikube-integration/17779-14199/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0612 20:28:34.563768   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:34.563644   33039 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa...
	I0612 20:28:34.685880   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:34.685763   33039 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/ha-844626-m02.rawdisk...
	I0612 20:28:34.685922   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Writing magic tar header
	I0612 20:28:34.685932   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Writing SSH key tar header
	I0612 20:28:34.685944   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:34.685889   33039 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02 ...
	I0612 20:28:34.686027   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02
	I0612 20:28:34.686056   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines
	I0612 20:28:34.686069   32635 main.go:141] libmachine: (ha-844626-m02) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02 (perms=drwx------)
	I0612 20:28:34.686081   32635 main.go:141] libmachine: (ha-844626-m02) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines (perms=drwxr-xr-x)
	I0612 20:28:34.686092   32635 main.go:141] libmachine: (ha-844626-m02) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube (perms=drwxr-xr-x)
	I0612 20:28:34.686104   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:28:34.686120   32635 main.go:141] libmachine: (ha-844626-m02) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199 (perms=drwxrwxr-x)
	I0612 20:28:34.686136   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199
	I0612 20:28:34.686147   32635 main.go:141] libmachine: (ha-844626-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0612 20:28:34.686159   32635 main.go:141] libmachine: (ha-844626-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0612 20:28:34.686164   32635 main.go:141] libmachine: (ha-844626-m02) Creating domain...
	I0612 20:28:34.686171   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0612 20:28:34.686178   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Checking permissions on dir: /home/jenkins
	I0612 20:28:34.686183   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Checking permissions on dir: /home
	I0612 20:28:34.686193   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Skipping /home - not owner
	I0612 20:28:34.687131   32635 main.go:141] libmachine: (ha-844626-m02) define libvirt domain using xml: 
	I0612 20:28:34.687156   32635 main.go:141] libmachine: (ha-844626-m02) <domain type='kvm'>
	I0612 20:28:34.687163   32635 main.go:141] libmachine: (ha-844626-m02)   <name>ha-844626-m02</name>
	I0612 20:28:34.687179   32635 main.go:141] libmachine: (ha-844626-m02)   <memory unit='MiB'>2200</memory>
	I0612 20:28:34.687188   32635 main.go:141] libmachine: (ha-844626-m02)   <vcpu>2</vcpu>
	I0612 20:28:34.687195   32635 main.go:141] libmachine: (ha-844626-m02)   <features>
	I0612 20:28:34.687225   32635 main.go:141] libmachine: (ha-844626-m02)     <acpi/>
	I0612 20:28:34.687246   32635 main.go:141] libmachine: (ha-844626-m02)     <apic/>
	I0612 20:28:34.687257   32635 main.go:141] libmachine: (ha-844626-m02)     <pae/>
	I0612 20:28:34.687271   32635 main.go:141] libmachine: (ha-844626-m02)     
	I0612 20:28:34.687284   32635 main.go:141] libmachine: (ha-844626-m02)   </features>
	I0612 20:28:34.687297   32635 main.go:141] libmachine: (ha-844626-m02)   <cpu mode='host-passthrough'>
	I0612 20:28:34.687313   32635 main.go:141] libmachine: (ha-844626-m02)   
	I0612 20:28:34.687323   32635 main.go:141] libmachine: (ha-844626-m02)   </cpu>
	I0612 20:28:34.687334   32635 main.go:141] libmachine: (ha-844626-m02)   <os>
	I0612 20:28:34.687351   32635 main.go:141] libmachine: (ha-844626-m02)     <type>hvm</type>
	I0612 20:28:34.687374   32635 main.go:141] libmachine: (ha-844626-m02)     <boot dev='cdrom'/>
	I0612 20:28:34.687395   32635 main.go:141] libmachine: (ha-844626-m02)     <boot dev='hd'/>
	I0612 20:28:34.687411   32635 main.go:141] libmachine: (ha-844626-m02)     <bootmenu enable='no'/>
	I0612 20:28:34.687427   32635 main.go:141] libmachine: (ha-844626-m02)   </os>
	I0612 20:28:34.687451   32635 main.go:141] libmachine: (ha-844626-m02)   <devices>
	I0612 20:28:34.687462   32635 main.go:141] libmachine: (ha-844626-m02)     <disk type='file' device='cdrom'>
	I0612 20:28:34.687475   32635 main.go:141] libmachine: (ha-844626-m02)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/boot2docker.iso'/>
	I0612 20:28:34.687483   32635 main.go:141] libmachine: (ha-844626-m02)       <target dev='hdc' bus='scsi'/>
	I0612 20:28:34.687488   32635 main.go:141] libmachine: (ha-844626-m02)       <readonly/>
	I0612 20:28:34.687495   32635 main.go:141] libmachine: (ha-844626-m02)     </disk>
	I0612 20:28:34.687502   32635 main.go:141] libmachine: (ha-844626-m02)     <disk type='file' device='disk'>
	I0612 20:28:34.687511   32635 main.go:141] libmachine: (ha-844626-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0612 20:28:34.687520   32635 main.go:141] libmachine: (ha-844626-m02)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/ha-844626-m02.rawdisk'/>
	I0612 20:28:34.687528   32635 main.go:141] libmachine: (ha-844626-m02)       <target dev='hda' bus='virtio'/>
	I0612 20:28:34.687534   32635 main.go:141] libmachine: (ha-844626-m02)     </disk>
	I0612 20:28:34.687541   32635 main.go:141] libmachine: (ha-844626-m02)     <interface type='network'>
	I0612 20:28:34.687548   32635 main.go:141] libmachine: (ha-844626-m02)       <source network='mk-ha-844626'/>
	I0612 20:28:34.687554   32635 main.go:141] libmachine: (ha-844626-m02)       <model type='virtio'/>
	I0612 20:28:34.687560   32635 main.go:141] libmachine: (ha-844626-m02)     </interface>
	I0612 20:28:34.687567   32635 main.go:141] libmachine: (ha-844626-m02)     <interface type='network'>
	I0612 20:28:34.687573   32635 main.go:141] libmachine: (ha-844626-m02)       <source network='default'/>
	I0612 20:28:34.687580   32635 main.go:141] libmachine: (ha-844626-m02)       <model type='virtio'/>
	I0612 20:28:34.687585   32635 main.go:141] libmachine: (ha-844626-m02)     </interface>
	I0612 20:28:34.687591   32635 main.go:141] libmachine: (ha-844626-m02)     <serial type='pty'>
	I0612 20:28:34.687597   32635 main.go:141] libmachine: (ha-844626-m02)       <target port='0'/>
	I0612 20:28:34.687604   32635 main.go:141] libmachine: (ha-844626-m02)     </serial>
	I0612 20:28:34.687611   32635 main.go:141] libmachine: (ha-844626-m02)     <console type='pty'>
	I0612 20:28:34.687618   32635 main.go:141] libmachine: (ha-844626-m02)       <target type='serial' port='0'/>
	I0612 20:28:34.687623   32635 main.go:141] libmachine: (ha-844626-m02)     </console>
	I0612 20:28:34.687629   32635 main.go:141] libmachine: (ha-844626-m02)     <rng model='virtio'>
	I0612 20:28:34.687646   32635 main.go:141] libmachine: (ha-844626-m02)       <backend model='random'>/dev/random</backend>
	I0612 20:28:34.687662   32635 main.go:141] libmachine: (ha-844626-m02)     </rng>
	I0612 20:28:34.687674   32635 main.go:141] libmachine: (ha-844626-m02)     
	I0612 20:28:34.687684   32635 main.go:141] libmachine: (ha-844626-m02)     
	I0612 20:28:34.687692   32635 main.go:141] libmachine: (ha-844626-m02)   </devices>
	I0612 20:28:34.687702   32635 main.go:141] libmachine: (ha-844626-m02) </domain>
	I0612 20:28:34.687715   32635 main.go:141] libmachine: (ha-844626-m02) 
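
The XML just printed is the libvirt domain definition for the new node: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk as a virtio device, and two virtio NICs (one on the cluster network mk-ha-844626, one on libvirt's default network). A minimal sketch of defining and booting such a domain with the Go libvirt bindings (assuming the libvirt.org/go/libvirt package and a local XML file; not minikube's own driver code):

    // Sketch: persistently define a libvirt domain from an XML file, then start it.
    package main

    import (
        "fmt"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        xml, err := os.ReadFile("ha-844626-m02.xml") // the domain XML shown above, saved to a file
        if err != nil {
            panic(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // define the persistent domain
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boot it
            panic(err)
        }
        name, _ := dom.GetName()
        fmt.Println("started domain", name)
    }
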
	I0612 20:28:34.694563   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:e6:9f:42 in network default
	I0612 20:28:34.695320   32635 main.go:141] libmachine: (ha-844626-m02) Ensuring networks are active...
	I0612 20:28:34.695340   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:34.695978   32635 main.go:141] libmachine: (ha-844626-m02) Ensuring network default is active
	I0612 20:28:34.696283   32635 main.go:141] libmachine: (ha-844626-m02) Ensuring network mk-ha-844626 is active
	I0612 20:28:34.696687   32635 main.go:141] libmachine: (ha-844626-m02) Getting domain xml...
	I0612 20:28:34.697350   32635 main.go:141] libmachine: (ha-844626-m02) Creating domain...
	I0612 20:28:35.897652   32635 main.go:141] libmachine: (ha-844626-m02) Waiting to get IP...
	I0612 20:28:35.898547   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:35.898977   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:35.899019   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:35.898970   33039 retry.go:31] will retry after 188.812483ms: waiting for machine to come up
	I0612 20:28:36.089381   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:36.089937   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:36.089970   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:36.089883   33039 retry.go:31] will retry after 248.337423ms: waiting for machine to come up
	I0612 20:28:36.339460   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:36.339915   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:36.339981   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:36.339859   33039 retry.go:31] will retry after 483.208215ms: waiting for machine to come up
	I0612 20:28:36.824482   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:36.825125   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:36.825153   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:36.825074   33039 retry.go:31] will retry after 448.029523ms: waiting for machine to come up
	I0612 20:28:37.274773   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:37.275250   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:37.275275   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:37.275225   33039 retry.go:31] will retry after 689.330075ms: waiting for machine to come up
	I0612 20:28:37.966768   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:37.967833   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:37.967867   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:37.967781   33039 retry.go:31] will retry after 820.730369ms: waiting for machine to come up
	I0612 20:28:38.789810   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:38.790276   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:38.790302   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:38.790220   33039 retry.go:31] will retry after 806.096624ms: waiting for machine to come up
	I0612 20:28:39.597586   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:39.598130   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:39.598156   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:39.598100   33039 retry.go:31] will retry after 971.914744ms: waiting for machine to come up
	I0612 20:28:40.571299   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:40.571718   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:40.571747   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:40.571677   33039 retry.go:31] will retry after 1.557937808s: waiting for machine to come up
	I0612 20:28:42.131638   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:42.132079   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:42.132105   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:42.132033   33039 retry.go:31] will retry after 1.545550008s: waiting for machine to come up
	I0612 20:28:43.679913   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:43.680458   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:43.680486   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:43.680399   33039 retry.go:31] will retry after 2.155457776s: waiting for machine to come up
	I0612 20:28:45.838147   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:45.838800   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:45.838837   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:45.838740   33039 retry.go:31] will retry after 2.378044585s: waiting for machine to come up
	I0612 20:28:48.220330   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:48.220887   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:48.220914   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:48.220850   33039 retry.go:31] will retry after 3.582059005s: waiting for machine to come up
	I0612 20:28:51.804217   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:51.804650   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:51.804681   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:51.804596   33039 retry.go:31] will retry after 5.387350068s: waiting for machine to come up
	I0612 20:28:57.195392   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.195961   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has current primary IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.195989   32635 main.go:141] libmachine: (ha-844626-m02) Found IP for machine: 192.168.39.108
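The block above is minikube waiting for the new KVM guest to pick up a DHCP lease: each failed lookup of the domain's MAC address is followed by a progressively longer, jittered sleep. The Go sketch below reproduces that retry pattern under stated assumptions; lookupIP is a hypothetical stand-in for the libvirt lease query, and the backoff constants are illustrative rather than minikube's actual retry settings.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder for the libvirt DHCP-lease lookup done by the KVM driver.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease for " + mac + " yet")
}

// waitForIP polls lookupIP with a growing, jittered delay until the guest has an address.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		// Jittered, growing delay, mirroring the "will retry after ..." lines above.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("no IP for MAC %s within %v", mac, timeout)
}

func main() {
	if ip, err := waitForIP("52:54:00:01:79:34", 2*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}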
	I0612 20:28:57.196002   32635 main.go:141] libmachine: (ha-844626-m02) Reserving static IP address...
	I0612 20:28:57.196424   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find host DHCP lease matching {name: "ha-844626-m02", mac: "52:54:00:01:79:34", ip: "192.168.39.108"} in network mk-ha-844626
	I0612 20:28:57.265323   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Getting to WaitForSSH function...
	I0612 20:28:57.265349   32635 main.go:141] libmachine: (ha-844626-m02) Reserved static IP address: 192.168.39.108
	I0612 20:28:57.265369   32635 main.go:141] libmachine: (ha-844626-m02) Waiting for SSH to be available...
	I0612 20:28:57.267928   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.268262   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:minikube Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.268292   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.268445   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Using SSH client type: external
	I0612 20:28:57.268472   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa (-rw-------)
	I0612 20:28:57.268505   32635 main.go:141] libmachine: (ha-844626-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 20:28:57.268524   32635 main.go:141] libmachine: (ha-844626-m02) DBG | About to run SSH command:
	I0612 20:28:57.268538   32635 main.go:141] libmachine: (ha-844626-m02) DBG | exit 0
	I0612 20:28:57.395110   32635 main.go:141] libmachine: (ha-844626-m02) DBG | SSH cmd err, output: <nil>: 
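Once a lease appears, libmachine probes SSH by running "exit 0" through an external ssh client with host-key checking disabled, as the DBG lines above show. A minimal sketch of that probe, assuming the docker user, key path, and IP from this run; the flag set is abbreviated from the full command logged above.

package main

import (
	"fmt"
	"os/exec"
)

// sshReady returns true when `ssh ... exit 0` against the guest succeeds.
func sshReady(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit", "0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	fmt.Println(sshReady("192.168.39.108", "/path/to/id_rsa"))
}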
	I0612 20:28:57.395444   32635 main.go:141] libmachine: (ha-844626-m02) KVM machine creation complete!
	I0612 20:28:57.395735   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetConfigRaw
	I0612 20:28:57.396315   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:57.396506   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:57.396679   32635 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0612 20:28:57.396694   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetState
	I0612 20:28:57.397968   32635 main.go:141] libmachine: Detecting operating system of created instance...
	I0612 20:28:57.397983   32635 main.go:141] libmachine: Waiting for SSH to be available...
	I0612 20:28:57.397991   32635 main.go:141] libmachine: Getting to WaitForSSH function...
	I0612 20:28:57.397999   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:57.400298   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.400617   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.400648   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.400741   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:57.400891   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.401040   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.401133   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:57.401264   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:57.401505   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0612 20:28:57.401518   32635 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0612 20:28:57.506466   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 20:28:57.506489   32635 main.go:141] libmachine: Detecting the provisioner...
	I0612 20:28:57.506498   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:57.509058   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.509414   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.509451   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.509619   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:57.509808   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.509945   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.510036   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:57.510211   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:57.510369   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0612 20:28:57.510379   32635 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0612 20:28:57.620069   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0612 20:28:57.620147   32635 main.go:141] libmachine: found compatible host: buildroot
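Provisioner detection boils down to reading /etc/os-release over SSH and matching the ID field; ID=buildroot selects the Buildroot provisioning path used by the minikube ISO. A small sketch of that parsing step, fed with the literal output captured above; detectProvisioner is an illustrative helper, not a minikube function.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner extracts the ID= field from /etc/os-release contents.
func detectProvisioner(osRelease string) (string, error) {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	return "", fmt.Errorf("no ID= field in /etc/os-release")
}

func main() {
	id, _ := detectProvisioner("NAME=Buildroot\nVERSION=2023.02.9\nID=buildroot\n")
	fmt.Println("found compatible host:", id) // buildroot
}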
	I0612 20:28:57.620157   32635 main.go:141] libmachine: Provisioning with buildroot...
	I0612 20:28:57.620164   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetMachineName
	I0612 20:28:57.620394   32635 buildroot.go:166] provisioning hostname "ha-844626-m02"
	I0612 20:28:57.620420   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetMachineName
	I0612 20:28:57.620587   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:57.623458   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.623898   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.623920   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.624077   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:57.624257   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.624421   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.624577   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:57.624740   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:57.624954   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0612 20:28:57.624974   32635 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844626-m02 && echo "ha-844626-m02" | sudo tee /etc/hostname
	I0612 20:28:57.749484   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844626-m02
	
	I0612 20:28:57.749508   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:57.752103   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.752525   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.752552   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.752756   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:57.752943   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.753115   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.753242   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:57.753437   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:57.753585   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0612 20:28:57.753600   32635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844626-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844626-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844626-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 20:28:57.868256   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
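Hostname provisioning is two shell steps: set and persist the hostname, then make sure /etc/hosts carries a matching 127.0.1.1 entry, as the commands above show. The sketch below only composes those command strings; sending them through an SSH runner is left out, and hostnameCommands is an illustrative helper, not a minikube function.

package main

import "fmt"

// hostnameCommands builds the shell run on the guest to persist its hostname.
func hostnameCommands(hostname string) []string {
	return []string{
		fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", hostname, hostname),
		fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname),
	}
}

func main() {
	for _, c := range hostnameCommands("ha-844626-m02") {
		fmt.Println(c)
	}
}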
	I0612 20:28:57.868289   32635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 20:28:57.868310   32635 buildroot.go:174] setting up certificates
	I0612 20:28:57.868322   32635 provision.go:84] configureAuth start
	I0612 20:28:57.868334   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetMachineName
	I0612 20:28:57.868621   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:28:57.870970   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.871384   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.871404   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.871578   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:57.873675   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.873971   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.873998   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.874119   32635 provision.go:143] copyHostCerts
	I0612 20:28:57.874150   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:28:57.874180   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 20:28:57.874188   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:28:57.874249   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 20:28:57.874321   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:28:57.874339   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 20:28:57.874345   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:28:57.874369   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 20:28:57.874411   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:28:57.874441   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 20:28:57.874447   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:28:57.874475   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 20:28:57.874523   32635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.ha-844626-m02 san=[127.0.0.1 192.168.39.108 ha-844626-m02 localhost minikube]
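The server certificate generated here is signed by the shared minikube CA and carries the SAN list from the log: loopback, the node IP, the node hostname, localhost, and minikube. The sketch below is not minikube's code; it builds a throwaway CA in memory instead of loading ca.pem/ca-key.pem, but produces a certificate with the same SAN shape using crypto/x509 (errors are dropped for brevity).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA generated on the fly; minikube loads ca.pem / ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs listed in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-844626-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.108")},
		DNSNames:     []string{"ha-844626-m02", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}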
	I0612 20:28:57.943494   32635 provision.go:177] copyRemoteCerts
	I0612 20:28:57.943546   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 20:28:57.943573   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:57.945926   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.946234   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.946263   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.946411   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:57.946596   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.946739   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:57.946878   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	I0612 20:28:58.029866   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0612 20:28:58.029924   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 20:28:58.054564   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0612 20:28:58.054630   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0612 20:28:58.077787   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0612 20:28:58.077838   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 20:28:58.102564   32635 provision.go:87] duration metric: took 234.230123ms to configureAuth
	I0612 20:28:58.102588   32635 buildroot.go:189] setting minikube options for container-runtime
	I0612 20:28:58.102781   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:28:58.102856   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:58.105183   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.105582   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.105609   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.105780   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:58.105958   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:58.106118   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:58.106241   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:58.106395   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:58.106547   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0612 20:28:58.106560   32635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 20:28:58.369002   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 20:28:58.369044   32635 main.go:141] libmachine: Checking connection to Docker...
	I0612 20:28:58.369056   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetURL
	I0612 20:28:58.370275   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Using libvirt version 6000000
	I0612 20:28:58.372493   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.372917   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.372946   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.373091   32635 main.go:141] libmachine: Docker is up and running!
	I0612 20:28:58.373106   32635 main.go:141] libmachine: Reticulating splines...
	I0612 20:28:58.373112   32635 client.go:171] duration metric: took 24.050299211s to LocalClient.Create
	I0612 20:28:58.373135   32635 start.go:167] duration metric: took 24.050353188s to libmachine.API.Create "ha-844626"
	I0612 20:28:58.373153   32635 start.go:293] postStartSetup for "ha-844626-m02" (driver="kvm2")
	I0612 20:28:58.373169   32635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 20:28:58.373192   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:58.373410   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 20:28:58.373429   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:58.375789   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.376115   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.376139   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.376262   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:58.376430   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:58.376585   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:58.376724   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	I0612 20:28:58.461494   32635 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 20:28:58.465756   32635 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 20:28:58.465773   32635 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 20:28:58.465842   32635 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 20:28:58.465945   32635 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 20:28:58.465956   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /etc/ssl/certs/214442.pem
	I0612 20:28:58.466033   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 20:28:58.475007   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:28:58.498583   32635 start.go:296] duration metric: took 125.415488ms for postStartSetup
	I0612 20:28:58.498630   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetConfigRaw
	I0612 20:28:58.499244   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:28:58.501609   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.501916   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.501943   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.502145   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:28:58.502300   32635 start.go:128] duration metric: took 24.197154786s to createHost
	I0612 20:28:58.502327   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:58.504428   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.504751   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.504779   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.504909   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:58.505058   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:58.505206   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:58.505333   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:58.505505   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:58.505675   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0612 20:28:58.505691   32635 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 20:28:58.611926   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718224138.588199886
	
	I0612 20:28:58.611948   32635 fix.go:216] guest clock: 1718224138.588199886
	I0612 20:28:58.611956   32635 fix.go:229] Guest: 2024-06-12 20:28:58.588199886 +0000 UTC Remote: 2024-06-12 20:28:58.502310999 +0000 UTC m=+77.562759418 (delta=85.888887ms)
	I0612 20:28:58.611969   32635 fix.go:200] guest clock delta is within tolerance: 85.888887ms
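The fix lines above compare the guest clock (read over SSH with what is effectively `date +%s.%N`) against the host clock and accept the machine when the delta is within tolerance; here it is about 86ms. A small sketch of that comparison, with the guest timestamp passed in directly instead of coming from an SSH command; clockDeltaOK is an illustrative helper.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDeltaOK parses the guest's "seconds.nanoseconds" timestamp and checks the skew.
func clockDeltaOK(guestDate string, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestDate, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Values taken from the log above; tolerance of one second is illustrative.
	delta, ok := clockDeltaOK("1718224138.588199886", time.Unix(1718224138, 502310999), time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}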
	I0612 20:28:58.611974   32635 start.go:83] releasing machines lock for "ha-844626-m02", held for 24.306954637s
	I0612 20:28:58.611990   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:58.612277   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:28:58.614893   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.615331   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.615360   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.617819   32635 out.go:177] * Found network options:
	I0612 20:28:58.619210   32635 out.go:177]   - NO_PROXY=192.168.39.196
	W0612 20:28:58.620341   32635 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 20:28:58.620364   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:58.620832   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:58.621001   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:58.621053   32635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 20:28:58.621093   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	W0612 20:28:58.621186   32635 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 20:28:58.621263   32635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 20:28:58.621283   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:58.623686   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.623949   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.624031   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.624057   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.624200   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:58.624331   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.624350   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.624390   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:58.624498   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:58.624671   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:58.624687   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:58.624855   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	I0612 20:28:58.624931   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:58.625116   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	I0612 20:28:58.876599   32635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 20:28:58.882927   32635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 20:28:58.882992   32635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 20:28:58.899597   32635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 20:28:58.899627   32635 start.go:494] detecting cgroup driver to use...
	I0612 20:28:58.899682   32635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 20:28:58.918649   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 20:28:58.935105   32635 docker.go:217] disabling cri-docker service (if available) ...
	I0612 20:28:58.935189   32635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 20:28:58.951300   32635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 20:28:58.967318   32635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 20:28:59.089527   32635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 20:28:59.248030   32635 docker.go:233] disabling docker service ...
	I0612 20:28:59.248104   32635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 20:28:59.262936   32635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 20:28:59.276351   32635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 20:28:59.401042   32635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 20:28:59.537934   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 20:28:59.552277   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 20:28:59.571272   32635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 20:28:59.571324   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:59.581780   32635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 20:28:59.581825   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:59.593899   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:59.605609   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:59.616721   32635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 20:28:59.628658   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:59.639910   32635 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:59.659020   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:59.670101   32635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 20:28:59.681844   32635 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 20:28:59.681903   32635 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 20:28:59.696113   32635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 20:28:59.705625   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:28:59.831551   32635 ssh_runner.go:195] Run: sudo systemctl restart crio
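The CRI-O pass above is a handful of sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup driver, conmon cgroup, default sysctls) followed by a daemon-reload and a crio restart. A compressed sketch of the two central edits plus the restart; crioConfigCommands is an illustrative helper and only builds the command strings rather than running them over SSH.

package main

import "fmt"

// crioConfigCommands returns the shell steps that point CRI-O at the desired
// pause image and cgroup driver, then restart the service.
func crioConfigCommands(pauseImage, cgroupDriver string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(c)
	}
}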
	I0612 20:28:59.977684   32635 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 20:28:59.977760   32635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 20:28:59.982563   32635 start.go:562] Will wait 60s for crictl version
	I0612 20:28:59.982603   32635 ssh_runner.go:195] Run: which crictl
	I0612 20:28:59.986337   32635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 20:29:00.028823   32635 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 20:29:00.028916   32635 ssh_runner.go:195] Run: crio --version
	I0612 20:29:00.057647   32635 ssh_runner.go:195] Run: crio --version
	I0612 20:29:00.087556   32635 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 20:29:00.088913   32635 out.go:177]   - env NO_PROXY=192.168.39.196
	I0612 20:29:00.090027   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:29:00.092690   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:29:00.093031   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:29:00.093061   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:29:00.093300   32635 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 20:29:00.097686   32635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 20:29:00.110349   32635 mustload.go:65] Loading cluster: ha-844626
	I0612 20:29:00.110562   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:29:00.110852   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:29:00.110876   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:29:00.125518   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38793
	I0612 20:29:00.125906   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:29:00.126358   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:29:00.126382   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:29:00.126721   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:29:00.126923   32635 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:29:00.128554   32635 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:29:00.128849   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:29:00.128879   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:29:00.143233   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38125
	I0612 20:29:00.143632   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:29:00.144016   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:29:00.144034   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:29:00.144329   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:29:00.144500   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:29:00.144652   32635 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626 for IP: 192.168.39.108
	I0612 20:29:00.144663   32635 certs.go:194] generating shared ca certs ...
	I0612 20:29:00.144677   32635 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:29:00.144812   32635 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 20:29:00.144865   32635 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 20:29:00.144877   32635 certs.go:256] generating profile certs ...
	I0612 20:29:00.144960   32635 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key
	I0612 20:29:00.145001   32635 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.059a86cd
	I0612 20:29:00.145021   32635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.059a86cd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.108 192.168.39.254]
	I0612 20:29:00.584225   32635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.059a86cd ...
	I0612 20:29:00.584254   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.059a86cd: {Name:mkf7f603aba2d032d0ddac91ace726374be7c03e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:29:00.584414   32635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.059a86cd ...
	I0612 20:29:00.584428   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.059a86cd: {Name:mkaf0bf5abb5b3686773dca74b383000e538c998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:29:00.584501   32635 certs.go:381] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.059a86cd -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt
	I0612 20:29:00.584630   32635 certs.go:385] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.059a86cd -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key
	I0612 20:29:00.584748   32635 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key
	I0612 20:29:00.584766   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 20:29:00.584778   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0612 20:29:00.584788   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 20:29:00.584798   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 20:29:00.584811   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 20:29:00.584823   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 20:29:00.584836   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 20:29:00.584847   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 20:29:00.584893   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 20:29:00.584920   32635 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 20:29:00.584928   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 20:29:00.584950   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 20:29:00.584970   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 20:29:00.584991   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 20:29:00.585032   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:29:00.585057   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /usr/share/ca-certificates/214442.pem
	I0612 20:29:00.585071   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:29:00.585083   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem -> /usr/share/ca-certificates/21444.pem
	I0612 20:29:00.585112   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:29:00.587738   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:29:00.588068   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:29:00.588095   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:29:00.588340   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:29:00.588551   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:29:00.588725   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:29:00.588868   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:29:00.659476   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0612 20:29:00.664422   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0612 20:29:00.675398   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0612 20:29:00.679539   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0612 20:29:00.691907   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0612 20:29:00.698123   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0612 20:29:00.708858   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0612 20:29:00.712859   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0612 20:29:00.723264   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0612 20:29:00.727309   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0612 20:29:00.737484   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0612 20:29:00.741545   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0612 20:29:00.752784   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 20:29:00.779695   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 20:29:00.804527   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 20:29:00.828348   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 20:29:00.852299   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0612 20:29:00.877114   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 20:29:00.903800   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 20:29:00.927395   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 20:29:00.951574   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 20:29:00.975157   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 20:29:00.998299   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 20:29:01.021449   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0612 20:29:01.038090   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0612 20:29:01.053868   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0612 20:29:01.070256   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0612 20:29:01.087166   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0612 20:29:01.103247   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0612 20:29:01.119210   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
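Each of the copies above streams a local file (or an in-memory buffer) to the guest over the machine's SSH connection. minikube does this through its own SSH runner; the sketch below gets the same effect with a plain scp invocation, with the docker user and paths taken from this run but otherwise illustrative.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// scpToGuest pushes a local file to the guest using the machine's SSH key.
func scpToGuest(keyPath, local, ip, remote string) error {
	dest := fmt.Sprintf("docker@%s:%s", ip, remote)
	cmd := exec.Command("scp",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		local, dest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("scp %s -> %s: %v: %s", local, dest, err, out)
	}
	return nil
}

func main() {
	if err := scpToGuest("/path/to/id_rsa", "ca.crt", "192.168.39.108", "/var/lib/minikube/certs/ca.crt"); err != nil {
		log.Fatal(err)
	}
}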
	I0612 20:29:01.135180   32635 ssh_runner.go:195] Run: openssl version
	I0612 20:29:01.140742   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 20:29:01.150801   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 20:29:01.155105   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 20:29:01.155153   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 20:29:01.160789   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 20:29:01.171116   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 20:29:01.181677   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:29:01.186153   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:29:01.186187   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:29:01.191804   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 20:29:01.201928   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 20:29:01.211840   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 20:29:01.216015   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 20:29:01.216068   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 20:29:01.221627   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
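The commands above install the cluster and user CA certificates on the node and register them with OpenSSL-style hash symlinks under /etc/ssl/certs. As a rough illustration of what that buys, the sketch below (not minikube code; paths are taken from the log lines above) checks that the apiserver certificate pushed a few lines earlier actually chains to the minikube CA:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// CA and leaf paths as they appear in the scp lines above (assumed present).
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(caPEM) {
		panic("could not parse ca.crt")
	}
	leafPEM, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(leafPEM)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Verify the leaf against a pool containing only the minikube CA.
	if _, err := cert.Verify(x509.VerifyOptions{Roots: roots}); err != nil {
		panic(err)
	}
	fmt.Println("apiserver.crt is signed by the minikube CA")
}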
	I0612 20:29:01.231647   32635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 20:29:01.235662   32635 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 20:29:01.235717   32635 kubeadm.go:928] updating node {m02 192.168.39.108 8443 v1.30.1 crio true true} ...
	I0612 20:29:01.235801   32635 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844626-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 20:29:01.235825   32635 kube-vip.go:115] generating kube-vip config ...
	I0612 20:29:01.235857   32635 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0612 20:29:01.251220   32635 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0612 20:29:01.251298   32635 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
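The generated manifest above runs kube-vip as a static pod with leader election, so whichever control-plane node holds the lease answers on the virtual IP 192.168.39.254:8443. A minimal probe of that VIP might look like the sketch below; it assumes the cluster keeps the default RBAC that leaves /readyz readable by unauthenticated clients, and it skips TLS verification because the probing host may not trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is signed by the cluster CA, which this host may
		// not trust; skip verification for this probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/readyz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP responded with", resp.Status)
}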
	I0612 20:29:01.251361   32635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 20:29:01.261705   32635 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0612 20:29:01.261774   32635 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0612 20:29:01.271597   32635 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0612 20:29:01.271624   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 20:29:01.271688   32635 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 20:29:01.271700   32635 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0612 20:29:01.271724   32635 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0612 20:29:01.277634   32635 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0612 20:29:01.277663   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0612 20:29:35.260896   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 20:29:35.260971   32635 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 20:29:35.267701   32635 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0612 20:29:35.267734   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0612 20:30:04.149643   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:30:04.167195   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 20:30:04.167292   32635 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 20:30:04.172548   32635 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0612 20:30:04.172583   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
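The three transfers above follow the same pattern: each binary is fetched from dl.k8s.io with a checksum taken from the matching .sha256 file, cached under .minikube/cache, then pushed over SSH. A stripped-down sketch of that download-and-verify step (illustrative only, minimal error handling, URL copied from the log):

package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL and returns its body, panicking on any error.
func fetch(url string) []byte {
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	data, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	return data
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl"
	binary := fetch(base)
	// The .sha256 file carries the expected hex digest of the binary.
	want := strings.Fields(string(fetch(base + ".sha256")))[0]
	got := fmt.Sprintf("%x", sha256.Sum256(binary))
	if got != want {
		panic("checksum mismatch for kubectl")
	}
	if err := os.WriteFile("kubectl", binary, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl downloaded and verified:", got)
}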
	I0612 20:30:04.597711   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0612 20:30:04.609658   32635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0612 20:30:04.628872   32635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 20:30:04.648368   32635 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0612 20:30:04.668237   32635 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0612 20:30:04.673102   32635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 20:30:04.688620   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:30:04.817286   32635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 20:30:04.835873   32635 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:30:04.836330   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:30:04.836397   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:30:04.851181   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0612 20:30:04.851694   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:30:04.852206   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:30:04.852230   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:30:04.852564   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:30:04.852806   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:30:04.852975   32635 start.go:316] joinCluster: &{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:30:04.853091   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0612 20:30:04.853109   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:30:04.856247   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:30:04.856732   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:30:04.856761   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:30:04.856957   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:30:04.857130   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:30:04.857326   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:30:04.857490   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:30:05.018629   32635 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:30:05.018685   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hnkl3e.rou3l3k48xkgmpst --discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844626-m02 --control-plane --apiserver-advertise-address=192.168.39.108 --apiserver-bind-port=8443"
	I0612 20:30:27.495808   32635 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hnkl3e.rou3l3k48xkgmpst --discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844626-m02 --control-plane --apiserver-advertise-address=192.168.39.108 --apiserver-bind-port=8443": (22.477083857s)
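The join command above authenticates the new control-plane node with a bootstrap token plus --discovery-token-ca-cert-hash, which kubeadm defines as the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A small sketch that recomputes that value (not minikube code; CA path assumed from the cert transfers earlier in the log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(caPEM)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Re-encode the CA public key as DER SPKI and hash it, matching the
	// "sha256:<hex>" format shown in the kubeadm join command above.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum)
}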
	I0612 20:30:27.495845   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0612 20:30:28.088244   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844626-m02 minikube.k8s.io/updated_at=2024_06_12T20_30_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=ha-844626 minikube.k8s.io/primary=false
	I0612 20:30:28.204508   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-844626-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0612 20:30:28.312684   32635 start.go:318] duration metric: took 23.459705481s to joinCluster
	I0612 20:30:28.312755   32635 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:30:28.314300   32635 out.go:177] * Verifying Kubernetes components...
	I0612 20:30:28.313074   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:30:28.316145   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:30:28.565704   32635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 20:30:28.626283   32635 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:30:28.626628   32635 kapi.go:59] client config for ha-844626: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.crt", KeyFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key", CAFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfb000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0612 20:30:28.626714   32635 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.196:8443
	I0612 20:30:28.626982   32635 node_ready.go:35] waiting up to 6m0s for node "ha-844626-m02" to be "Ready" ...
	I0612 20:30:28.627077   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:28.627089   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:28.627101   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:28.627112   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:28.644797   32635 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0612 20:30:29.127366   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:29.127389   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:29.127398   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:29.127402   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:29.133873   32635 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 20:30:29.627278   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:29.627300   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:29.627308   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:29.627312   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:29.633747   32635 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 20:30:30.127280   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:30.127303   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:30.127311   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:30.127314   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:30.132068   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:30:30.628053   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:30.628079   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:30.628086   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:30.628095   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:30.631828   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:30.632601   32635 node_ready.go:53] node "ha-844626-m02" has status "Ready":"False"
	I0612 20:30:31.127764   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:31.127786   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:31.127793   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:31.127797   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:31.131236   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:31.627205   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:31.627228   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:31.627237   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:31.627240   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:31.630142   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:32.128033   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:32.128060   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:32.128070   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:32.128075   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:32.132429   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:30:32.627370   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:32.627395   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:32.627406   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:32.627411   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:32.630683   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:33.127539   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:33.127559   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:33.127566   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:33.127570   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:33.131292   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:33.132188   32635 node_ready.go:53] node "ha-844626-m02" has status "Ready":"False"
	I0612 20:30:33.627727   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:33.627749   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:33.627757   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:33.627761   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:33.631079   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:34.127319   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:34.127346   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:34.127358   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:34.127382   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:34.133781   32635 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 20:30:34.627337   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:34.627371   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:34.627383   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:34.627395   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:34.630687   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:35.127279   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:35.127310   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:35.127321   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:35.127327   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:35.131713   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:30:35.132298   32635 node_ready.go:53] node "ha-844626-m02" has status "Ready":"False"
	I0612 20:30:35.627486   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:35.627512   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:35.627520   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:35.627524   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:35.632682   32635 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 20:30:36.128136   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:36.128159   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:36.128169   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:36.128174   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:36.132160   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:36.628244   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:36.628274   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:36.628285   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:36.628290   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:36.631786   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:37.127909   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:37.127933   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.127942   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.127947   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.132108   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:30:37.132653   32635 node_ready.go:49] node "ha-844626-m02" has status "Ready":"True"
	I0612 20:30:37.132670   32635 node_ready.go:38] duration metric: took 8.505668168s for node "ha-844626-m02" to be "Ready" ...
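The node_ready loop above polls GET /api/v1/nodes/ha-844626-m02 roughly every half second until the Ready condition turns True. The equivalent check written directly against client-go might look like this sketch (kubeconfig path and node name taken from the log; purely illustrative, not the test's own helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17779-14199/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-844626-m02", metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				// Ready=True is the condition the log's "Ready":"True" lines report.
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the loop above polls on a similar cadence
	}
}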
	I0612 20:30:37.132678   32635 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 20:30:37.132747   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:30:37.132761   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.132767   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.132772   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.138422   32635 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 20:30:37.146045   32635 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bqzvn" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:37.146114   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bqzvn
	I0612 20:30:37.146123   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.146130   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.146134   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.149376   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:37.150108   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:37.150126   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.150136   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.150143   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.152828   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:37.153449   32635 pod_ready.go:92] pod "coredns-7db6d8ff4d-bqzvn" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:37.153465   32635 pod_ready.go:81] duration metric: took 7.398951ms for pod "coredns-7db6d8ff4d-bqzvn" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:37.153476   32635 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lxd6n" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:37.153526   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lxd6n
	I0612 20:30:37.153536   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.153546   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.153555   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.156152   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:37.156651   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:37.156663   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.156670   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.156674   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.158912   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:37.159577   32635 pod_ready.go:92] pod "coredns-7db6d8ff4d-lxd6n" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:37.159595   32635 pod_ready.go:81] duration metric: took 6.112307ms for pod "coredns-7db6d8ff4d-lxd6n" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:37.159606   32635 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:37.159656   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626
	I0612 20:30:37.159666   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.159676   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.159681   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.161869   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:37.162386   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:37.162399   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.162404   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.162409   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.164686   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:37.165142   32635 pod_ready.go:92] pod "etcd-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:37.165154   32635 pod_ready.go:81] duration metric: took 5.543189ms for pod "etcd-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:37.165161   32635 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:37.165228   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:37.165237   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.165245   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.165251   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.167587   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:37.168062   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:37.168074   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.168081   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.168084   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.170706   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:37.665823   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:37.665843   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.665851   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.665855   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.669692   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:37.670698   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:37.670711   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.670719   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.670731   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.673438   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:38.166072   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:38.166096   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:38.166108   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:38.166115   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:38.169307   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:38.169965   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:38.169983   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:38.169990   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:38.169993   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:38.173351   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:38.666212   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:38.666235   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:38.666247   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:38.666254   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:38.669485   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:38.670105   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:38.670119   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:38.670126   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:38.670130   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:38.672944   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:39.165744   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:39.165767   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:39.165774   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:39.165778   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:39.169896   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:30:39.171068   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:39.171088   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:39.171098   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:39.171105   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:39.174363   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:39.175307   32635 pod_ready.go:102] pod "etcd-ha-844626-m02" in "kube-system" namespace has status "Ready":"False"
	I0612 20:30:39.665878   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:39.665899   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:39.665908   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:39.665912   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:39.669567   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:39.670178   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:39.670198   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:39.670205   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:39.670209   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:39.673057   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:40.166324   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:40.166346   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.166359   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.166364   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.169985   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:40.170799   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:40.170815   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.170823   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.170829   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.173509   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:40.665467   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:40.665488   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.665495   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.665499   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.670274   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:30:40.671068   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:40.671086   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.671096   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.671102   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.675028   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:40.675988   32635 pod_ready.go:92] pod "etcd-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:40.676010   32635 pod_ready.go:81] duration metric: took 3.510836172s for pod "etcd-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:40.676030   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:40.676097   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844626
	I0612 20:30:40.676107   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.676117   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.676124   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.679035   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:40.679983   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:40.679996   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.680005   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.680010   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.682159   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:40.682753   32635 pod_ready.go:92] pod "kube-apiserver-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:40.682767   32635 pod_ready.go:81] duration metric: took 6.726967ms for pod "kube-apiserver-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:40.682779   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:40.682839   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844626-m02
	I0612 20:30:40.682849   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.682859   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.682869   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.685046   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:40.728756   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:40.728776   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.728785   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.728789   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.732550   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:40.733159   32635 pod_ready.go:92] pod "kube-apiserver-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:40.733177   32635 pod_ready.go:81] duration metric: took 50.388231ms for pod "kube-apiserver-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:40.733186   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:40.928599   32635 request.go:629] Waited for 195.361017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626
	I0612 20:30:40.928683   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626
	I0612 20:30:40.928692   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.928699   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.928704   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.931955   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:41.128801   32635 request.go:629] Waited for 195.731901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:41.128869   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:41.128874   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:41.128881   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:41.128889   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:41.132053   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:41.132799   32635 pod_ready.go:92] pod "kube-controller-manager-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:41.132816   32635 pod_ready.go:81] duration metric: took 399.625232ms for pod "kube-controller-manager-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:41.132825   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:41.328901   32635 request.go:629] Waited for 196.017593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:41.328966   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:41.328971   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:41.328978   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:41.328982   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:41.331865   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:41.528889   32635 request.go:629] Waited for 196.344493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:41.528936   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:41.528941   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:41.528949   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:41.528953   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:41.532714   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:41.728477   32635 request.go:629] Waited for 95.271831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:41.728538   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:41.728557   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:41.728570   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:41.728580   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:41.732043   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:41.928219   32635 request.go:629] Waited for 195.361825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:41.928276   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:41.928291   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:41.928298   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:41.928305   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:41.931981   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:42.133202   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:42.133221   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:42.133229   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:42.133234   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:42.136376   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:42.328629   32635 request.go:629] Waited for 191.365593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:42.328707   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:42.328715   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:42.328725   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:42.328730   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:42.332363   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:42.633285   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:42.633310   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:42.633320   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:42.633327   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:42.636925   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:42.728953   32635 request.go:629] Waited for 91.26389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:42.729002   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:42.729014   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:42.729031   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:42.729037   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:42.732484   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:43.133607   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:43.133629   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:43.133636   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:43.133640   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:43.137400   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:43.138180   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:43.138198   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:43.138208   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:43.138214   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:43.141025   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:43.141740   32635 pod_ready.go:102] pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace has status "Ready":"False"
	I0612 20:30:43.633276   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:43.633299   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:43.633310   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:43.633315   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:43.636399   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:43.637201   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:43.637225   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:43.637233   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:43.637237   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:43.640005   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:44.133098   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:44.133126   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:44.133139   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:44.133143   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:44.138749   32635 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 20:30:44.139503   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:44.139528   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:44.139535   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:44.139542   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:44.142326   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:44.633872   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:44.633893   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:44.633901   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:44.633904   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:44.637714   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:44.638335   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:44.638351   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:44.638362   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:44.638368   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:44.641221   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:45.133802   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:45.133824   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:45.133831   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:45.133835   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:45.136674   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:45.137376   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:45.137389   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:45.137397   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:45.137401   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:45.140982   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:45.633810   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:45.633832   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:45.633840   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:45.633843   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:45.637360   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:45.637973   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:45.637989   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:45.637999   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:45.638005   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:45.640544   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:45.641041   32635 pod_ready.go:102] pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace has status "Ready":"False"
	I0612 20:30:46.133029   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:46.133052   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.133059   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.133065   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.136158   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:46.137033   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:46.137047   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.137054   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.137058   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.139843   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:46.140461   32635 pod_ready.go:92] pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:46.140480   32635 pod_ready.go:81] duration metric: took 5.007648409s for pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:46.140489   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-69ctp" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:46.140558   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-69ctp
	I0612 20:30:46.140571   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.140580   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.140587   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.143798   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:46.144406   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:46.144419   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.144425   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.144435   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.153535   32635 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 20:30:46.154204   32635 pod_ready.go:92] pod "kube-proxy-69ctp" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:46.154222   32635 pod_ready.go:81] duration metric: took 13.726572ms for pod "kube-proxy-69ctp" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:46.154231   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f7ct8" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:46.154287   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f7ct8
	I0612 20:30:46.154294   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.154302   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.154309   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.156591   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:46.328647   32635 request.go:629] Waited for 171.371767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:46.328700   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:46.328707   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.328714   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.328720   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.331955   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:46.333387   32635 pod_ready.go:92] pod "kube-proxy-f7ct8" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:46.333406   32635 pod_ready.go:81] duration metric: took 179.1699ms for pod "kube-proxy-f7ct8" in "kube-system" namespace to be "Ready" ...
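
The "Waited for … due to client-side throttling" messages above come from client-go's own token-bucket rate limiter (by default roughly 5 requests/second with a burst of 10 on a rest.Config), not from server-side API Priority and Fairness. As a rough, hypothetical sketch in Go, that limit can be raised when the clientset is built; the kubeconfig path and the numbers below are placeholders, not minikube's actual settings:

    // raise_client_qps.go - loosen client-go's client-side rate limiter (illustrative sketch only).
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset whose client-side limiter allows more
    // requests per second than the defaults (~5 QPS / burst 10).
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // placeholder value
        cfg.Burst = 100 // placeholder value
        return kubernetes.NewForConfig(cfg)
    }
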
	I0612 20:30:46.333416   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:46.528928   32635 request.go:629] Waited for 195.451982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626
	I0612 20:30:46.528997   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626
	I0612 20:30:46.529005   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.529016   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.529021   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.532898   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:46.728720   32635 request.go:629] Waited for 195.095407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:46.728799   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:46.728807   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.728818   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.728828   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.732323   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:46.733062   32635 pod_ready.go:92] pod "kube-scheduler-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:46.733082   32635 pod_ready.go:81] duration metric: took 399.660168ms for pod "kube-scheduler-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:46.733096   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:46.928012   32635 request.go:629] Waited for 194.843878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626-m02
	I0612 20:30:46.928085   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626-m02
	I0612 20:30:46.928091   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.928099   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.928103   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.931580   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:47.128657   32635 request.go:629] Waited for 196.421042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:47.128741   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:47.128755   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:47.128764   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:47.128776   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:47.132817   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:30:47.133589   32635 pod_ready.go:92] pod "kube-scheduler-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:47.133609   32635 pod_ready.go:81] duration metric: took 400.502952ms for pod "kube-scheduler-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:47.133623   32635 pod_ready.go:38] duration metric: took 10.000934337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
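
The readiness loop recorded above polls each control-plane pod about twice a second until its PodReady condition reports True. A compact client-go sketch of the same pattern, assuming a placeholder kubeconfig path and reusing the 6-minute timeout mentioned in the log (illustrative only, not minikube's implementation):

    // pod_ready_sketch.go - poll a pod until it reports the Ready condition (illustrative sketch).
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the polling cadence seen in the log
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
        // Error handling elided; the kubeconfig path is a placeholder.
        cfg, _ := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        cs, _ := kubernetes.NewForConfig(cfg)
        fmt.Println(waitForPodReady(cs, "kube-system", "kube-scheduler-ha-844626-m02", 6*time.Minute))
    }
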
	I0612 20:30:47.133647   32635 api_server.go:52] waiting for apiserver process to appear ...
	I0612 20:30:47.133705   32635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:30:47.149876   32635 api_server.go:72] duration metric: took 18.837089852s to wait for apiserver process to appear ...
	I0612 20:30:47.149901   32635 api_server.go:88] waiting for apiserver healthz status ...
	I0612 20:30:47.149916   32635 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0612 20:30:47.157443   32635 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0612 20:30:47.157515   32635 round_trippers.go:463] GET https://192.168.39.196:8443/version
	I0612 20:30:47.157527   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:47.157539   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:47.157549   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:47.159286   32635 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 20:30:47.159522   32635 api_server.go:141] control plane version: v1.30.1
	I0612 20:30:47.159544   32635 api_server.go:131] duration metric: took 9.636955ms to wait for apiserver health ...
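
The healthz and version probes above are plain HTTPS GETs against the control-plane endpoint. A stripped-down sketch of the healthz check; TLS verification is skipped here purely for brevity, whereas a real client would trust the cluster CA:

    // healthz_probe.go - probe the apiserver's /healthz endpoint (illustrative sketch).
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption for the sketch: skip certificate verification instead of loading the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.196:8443/healthz")
        if err != nil {
            fmt.Println("healthz error:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz %d: %s\n", resp.StatusCode, body) // a healthy apiserver answers 200 "ok"
    }
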
	I0612 20:30:47.159557   32635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 20:30:47.327918   32635 request.go:629] Waited for 168.289713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:30:47.327976   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:30:47.328004   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:47.328012   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:47.328017   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:47.335146   32635 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 20:30:47.340883   32635 system_pods.go:59] 17 kube-system pods found
	I0612 20:30:47.340913   32635 system_pods.go:61] "coredns-7db6d8ff4d-bqzvn" [b22b3ba0-1a59-4066-9db5-380986d73dca] Running
	I0612 20:30:47.340919   32635 system_pods.go:61] "coredns-7db6d8ff4d-lxd6n" [65d25d78-6fa7-4dc7-9cf2-e2fac796f194] Running
	I0612 20:30:47.340925   32635 system_pods.go:61] "etcd-ha-844626" [73812d48-addc-4957-ae24-6bbad2f5fbaa] Running
	I0612 20:30:47.340930   32635 system_pods.go:61] "etcd-ha-844626-m02" [57d89f35-94d4-4b64-a648-c440eaddef2a] Running
	I0612 20:30:47.340934   32635 system_pods.go:61] "kindnet-fz6bl" [fb946e9f-19cd-4a9f-8585-88118c840922] Running
	I0612 20:30:47.340939   32635 system_pods.go:61] "kindnet-mthnq" [49950bb0-368d-4239-ae93-04c980a8b531] Running
	I0612 20:30:47.340943   32635 system_pods.go:61] "kube-apiserver-ha-844626" [0e8ba551-e997-453a-b76f-a090a441bce4] Running
	I0612 20:30:47.340948   32635 system_pods.go:61] "kube-apiserver-ha-844626-m02" [eeaf9c1b-e433-4de6-b6e8-4c33cd467a42] Running
	I0612 20:30:47.340952   32635 system_pods.go:61] "kube-controller-manager-ha-844626" [9bca7a0a-74d1-4b9c-9915-2cf6a4eb5e52] Running
	I0612 20:30:47.340958   32635 system_pods.go:61] "kube-controller-manager-ha-844626-m02" [6e26986e-06e4-4e85-b83d-57c2254732f0] Running
	I0612 20:30:47.340963   32635 system_pods.go:61] "kube-proxy-69ctp" [c66149e8-2a69-4f1f-9ddc-5e272204e6f5] Running
	I0612 20:30:47.340968   32635 system_pods.go:61] "kube-proxy-f7ct8" [4bf3e7e1-68e8-4d0d-980b-cb5055e10365] Running
	I0612 20:30:47.340976   32635 system_pods.go:61] "kube-scheduler-ha-844626" [49238394-1429-40ce-8d74-290b1743547f] Running
	I0612 20:30:47.340986   32635 system_pods.go:61] "kube-scheduler-ha-844626-m02" [488c0960-8abb-40d1-a92e-bd4f61b5973b] Running
	I0612 20:30:47.340992   32635 system_pods.go:61] "kube-vip-ha-844626" [654fd183-21b0-4df5-b557-ed676c5ecb71] Running
	I0612 20:30:47.340999   32635 system_pods.go:61] "kube-vip-ha-844626-m02" [c7785d9d-bfc0-4f65-b853-36a7f2ba791b] Running
	I0612 20:30:47.341004   32635 system_pods.go:61] "storage-provisioner" [d94c16d7-da82-41e3-82fe-83ed6e581f69] Running
	I0612 20:30:47.341012   32635 system_pods.go:74] duration metric: took 181.444751ms to wait for pod list to return data ...
	I0612 20:30:47.341022   32635 default_sa.go:34] waiting for default service account to be created ...
	I0612 20:30:47.528373   32635 request.go:629] Waited for 187.26726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0612 20:30:47.528437   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0612 20:30:47.528443   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:47.528450   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:47.528454   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:47.532161   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:47.532374   32635 default_sa.go:45] found service account: "default"
	I0612 20:30:47.532392   32635 default_sa.go:55] duration metric: took 191.363691ms for default service account to be created ...
	I0612 20:30:47.532402   32635 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 20:30:47.728905   32635 request.go:629] Waited for 196.437134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:30:47.728985   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:30:47.728995   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:47.729006   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:47.729013   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:47.734247   32635 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 20:30:47.738940   32635 system_pods.go:86] 17 kube-system pods found
	I0612 20:30:47.738961   32635 system_pods.go:89] "coredns-7db6d8ff4d-bqzvn" [b22b3ba0-1a59-4066-9db5-380986d73dca] Running
	I0612 20:30:47.738967   32635 system_pods.go:89] "coredns-7db6d8ff4d-lxd6n" [65d25d78-6fa7-4dc7-9cf2-e2fac796f194] Running
	I0612 20:30:47.738971   32635 system_pods.go:89] "etcd-ha-844626" [73812d48-addc-4957-ae24-6bbad2f5fbaa] Running
	I0612 20:30:47.738975   32635 system_pods.go:89] "etcd-ha-844626-m02" [57d89f35-94d4-4b64-a648-c440eaddef2a] Running
	I0612 20:30:47.738979   32635 system_pods.go:89] "kindnet-fz6bl" [fb946e9f-19cd-4a9f-8585-88118c840922] Running
	I0612 20:30:47.738985   32635 system_pods.go:89] "kindnet-mthnq" [49950bb0-368d-4239-ae93-04c980a8b531] Running
	I0612 20:30:47.738991   32635 system_pods.go:89] "kube-apiserver-ha-844626" [0e8ba551-e997-453a-b76f-a090a441bce4] Running
	I0612 20:30:47.738996   32635 system_pods.go:89] "kube-apiserver-ha-844626-m02" [eeaf9c1b-e433-4de6-b6e8-4c33cd467a42] Running
	I0612 20:30:47.739002   32635 system_pods.go:89] "kube-controller-manager-ha-844626" [9bca7a0a-74d1-4b9c-9915-2cf6a4eb5e52] Running
	I0612 20:30:47.739008   32635 system_pods.go:89] "kube-controller-manager-ha-844626-m02" [6e26986e-06e4-4e85-b83d-57c2254732f0] Running
	I0612 20:30:47.739012   32635 system_pods.go:89] "kube-proxy-69ctp" [c66149e8-2a69-4f1f-9ddc-5e272204e6f5] Running
	I0612 20:30:47.739017   32635 system_pods.go:89] "kube-proxy-f7ct8" [4bf3e7e1-68e8-4d0d-980b-cb5055e10365] Running
	I0612 20:30:47.739021   32635 system_pods.go:89] "kube-scheduler-ha-844626" [49238394-1429-40ce-8d74-290b1743547f] Running
	I0612 20:30:47.739025   32635 system_pods.go:89] "kube-scheduler-ha-844626-m02" [488c0960-8abb-40d1-a92e-bd4f61b5973b] Running
	I0612 20:30:47.739029   32635 system_pods.go:89] "kube-vip-ha-844626" [654fd183-21b0-4df5-b557-ed676c5ecb71] Running
	I0612 20:30:47.739032   32635 system_pods.go:89] "kube-vip-ha-844626-m02" [c7785d9d-bfc0-4f65-b853-36a7f2ba791b] Running
	I0612 20:30:47.739036   32635 system_pods.go:89] "storage-provisioner" [d94c16d7-da82-41e3-82fe-83ed6e581f69] Running
	I0612 20:30:47.739042   32635 system_pods.go:126] duration metric: took 206.634655ms to wait for k8s-apps to be running ...
	I0612 20:30:47.739051   32635 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 20:30:47.739091   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:30:47.756076   32635 system_svc.go:56] duration metric: took 17.016768ms WaitForService to wait for kubelet
	I0612 20:30:47.756104   32635 kubeadm.go:576] duration metric: took 19.443318841s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 20:30:47.756129   32635 node_conditions.go:102] verifying NodePressure condition ...
	I0612 20:30:47.928545   32635 request.go:629] Waited for 172.345307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes
	I0612 20:30:47.928631   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes
	I0612 20:30:47.928636   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:47.928644   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:47.928649   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:47.932159   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:47.933103   32635 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 20:30:47.933136   32635 node_conditions.go:123] node cpu capacity is 2
	I0612 20:30:47.933159   32635 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 20:30:47.933165   32635 node_conditions.go:123] node cpu capacity is 2
	I0612 20:30:47.933171   32635 node_conditions.go:105] duration metric: took 177.036683ms to run NodePressure ...
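
The NodePressure step amounts to listing the nodes and reading the capacity each one reports (ephemeral storage and CPU in the lines above). A small helper sketch that does the same with client-go; it assumes a clientset built as in the earlier sketch:

    // node_capacity_sketch.go - print each node's CPU and ephemeral-storage capacity (illustrative sketch).
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func printNodeCapacity(cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
        return nil
    }
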
	I0612 20:30:47.933188   32635 start.go:240] waiting for startup goroutines ...
	I0612 20:30:47.933223   32635 start.go:254] writing updated cluster config ...
	I0612 20:30:47.935417   32635 out.go:177] 
	I0612 20:30:47.937248   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:30:47.937377   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:30:47.939120   32635 out.go:177] * Starting "ha-844626-m03" control-plane node in "ha-844626" cluster
	I0612 20:30:47.940397   32635 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:30:47.940418   32635 cache.go:56] Caching tarball of preloaded images
	I0612 20:30:47.940501   32635 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 20:30:47.940512   32635 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 20:30:47.940588   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:30:47.940905   32635 start.go:360] acquireMachinesLock for ha-844626-m03: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 20:30:47.940945   32635 start.go:364] duration metric: took 22.098µs to acquireMachinesLock for "ha-844626-m03"
	I0612 20:30:47.940964   32635 start.go:93] Provisioning new machine with config: &{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:30:47.941051   32635 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0612 20:30:47.943673   32635 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 20:30:47.943766   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:30:47.943798   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:30:47.959389   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35665
	I0612 20:30:47.959846   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:30:47.960359   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:30:47.960386   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:30:47.960716   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:30:47.960906   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetMachineName
	I0612 20:30:47.961019   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:30:47.961207   32635 start.go:159] libmachine.API.Create for "ha-844626" (driver="kvm2")
	I0612 20:30:47.961235   32635 client.go:168] LocalClient.Create starting
	I0612 20:30:47.961285   32635 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem
	I0612 20:30:47.961327   32635 main.go:141] libmachine: Decoding PEM data...
	I0612 20:30:47.961345   32635 main.go:141] libmachine: Parsing certificate...
	I0612 20:30:47.961413   32635 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem
	I0612 20:30:47.961441   32635 main.go:141] libmachine: Decoding PEM data...
	I0612 20:30:47.961449   32635 main.go:141] libmachine: Parsing certificate...
	I0612 20:30:47.961465   32635 main.go:141] libmachine: Running pre-create checks...
	I0612 20:30:47.961473   32635 main.go:141] libmachine: (ha-844626-m03) Calling .PreCreateCheck
	I0612 20:30:47.961648   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetConfigRaw
	I0612 20:30:47.962039   32635 main.go:141] libmachine: Creating machine...
	I0612 20:30:47.962053   32635 main.go:141] libmachine: (ha-844626-m03) Calling .Create
	I0612 20:30:47.962192   32635 main.go:141] libmachine: (ha-844626-m03) Creating KVM machine...
	I0612 20:30:47.963639   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found existing default KVM network
	I0612 20:30:47.963788   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found existing private KVM network mk-ha-844626
	I0612 20:30:47.963942   32635 main.go:141] libmachine: (ha-844626-m03) Setting up store path in /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03 ...
	I0612 20:30:47.963969   32635 main.go:141] libmachine: (ha-844626-m03) Building disk image from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0612 20:30:47.964005   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:47.963910   33685 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:30:47.964117   32635 main.go:141] libmachine: (ha-844626-m03) Downloading /home/jenkins/minikube-integration/17779-14199/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0612 20:30:48.183671   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:48.183542   33685 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa...
	I0612 20:30:48.278689   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:48.278547   33685 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/ha-844626-m03.rawdisk...
	I0612 20:30:48.278719   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Writing magic tar header
	I0612 20:30:48.278729   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Writing SSH key tar header
	I0612 20:30:48.278737   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:48.278674   33685 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03 ...
	I0612 20:30:48.278843   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03
	I0612 20:30:48.278861   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines
	I0612 20:30:48.278875   32635 main.go:141] libmachine: (ha-844626-m03) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03 (perms=drwx------)
	I0612 20:30:48.278884   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:30:48.278893   32635 main.go:141] libmachine: (ha-844626-m03) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines (perms=drwxr-xr-x)
	I0612 20:30:48.278907   32635 main.go:141] libmachine: (ha-844626-m03) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube (perms=drwxr-xr-x)
	I0612 20:30:48.278913   32635 main.go:141] libmachine: (ha-844626-m03) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199 (perms=drwxrwxr-x)
	I0612 20:30:48.278923   32635 main.go:141] libmachine: (ha-844626-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0612 20:30:48.278929   32635 main.go:141] libmachine: (ha-844626-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0612 20:30:48.278943   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199
	I0612 20:30:48.278952   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0612 20:30:48.278960   32635 main.go:141] libmachine: (ha-844626-m03) Creating domain...
	I0612 20:30:48.279067   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Checking permissions on dir: /home/jenkins
	I0612 20:30:48.279096   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Checking permissions on dir: /home
	I0612 20:30:48.279130   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Skipping /home - not owner
	I0612 20:30:48.280171   32635 main.go:141] libmachine: (ha-844626-m03) define libvirt domain using xml: 
	I0612 20:30:48.280192   32635 main.go:141] libmachine: (ha-844626-m03) <domain type='kvm'>
	I0612 20:30:48.280202   32635 main.go:141] libmachine: (ha-844626-m03)   <name>ha-844626-m03</name>
	I0612 20:30:48.280211   32635 main.go:141] libmachine: (ha-844626-m03)   <memory unit='MiB'>2200</memory>
	I0612 20:30:48.280218   32635 main.go:141] libmachine: (ha-844626-m03)   <vcpu>2</vcpu>
	I0612 20:30:48.280230   32635 main.go:141] libmachine: (ha-844626-m03)   <features>
	I0612 20:30:48.280261   32635 main.go:141] libmachine: (ha-844626-m03)     <acpi/>
	I0612 20:30:48.280282   32635 main.go:141] libmachine: (ha-844626-m03)     <apic/>
	I0612 20:30:48.280293   32635 main.go:141] libmachine: (ha-844626-m03)     <pae/>
	I0612 20:30:48.280304   32635 main.go:141] libmachine: (ha-844626-m03)     
	I0612 20:30:48.280333   32635 main.go:141] libmachine: (ha-844626-m03)   </features>
	I0612 20:30:48.280356   32635 main.go:141] libmachine: (ha-844626-m03)   <cpu mode='host-passthrough'>
	I0612 20:30:48.280363   32635 main.go:141] libmachine: (ha-844626-m03)   
	I0612 20:30:48.280372   32635 main.go:141] libmachine: (ha-844626-m03)   </cpu>
	I0612 20:30:48.280380   32635 main.go:141] libmachine: (ha-844626-m03)   <os>
	I0612 20:30:48.280386   32635 main.go:141] libmachine: (ha-844626-m03)     <type>hvm</type>
	I0612 20:30:48.280395   32635 main.go:141] libmachine: (ha-844626-m03)     <boot dev='cdrom'/>
	I0612 20:30:48.280406   32635 main.go:141] libmachine: (ha-844626-m03)     <boot dev='hd'/>
	I0612 20:30:48.280415   32635 main.go:141] libmachine: (ha-844626-m03)     <bootmenu enable='no'/>
	I0612 20:30:48.280425   32635 main.go:141] libmachine: (ha-844626-m03)   </os>
	I0612 20:30:48.280433   32635 main.go:141] libmachine: (ha-844626-m03)   <devices>
	I0612 20:30:48.280443   32635 main.go:141] libmachine: (ha-844626-m03)     <disk type='file' device='cdrom'>
	I0612 20:30:48.280455   32635 main.go:141] libmachine: (ha-844626-m03)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/boot2docker.iso'/>
	I0612 20:30:48.280465   32635 main.go:141] libmachine: (ha-844626-m03)       <target dev='hdc' bus='scsi'/>
	I0612 20:30:48.280474   32635 main.go:141] libmachine: (ha-844626-m03)       <readonly/>
	I0612 20:30:48.280484   32635 main.go:141] libmachine: (ha-844626-m03)     </disk>
	I0612 20:30:48.280494   32635 main.go:141] libmachine: (ha-844626-m03)     <disk type='file' device='disk'>
	I0612 20:30:48.280504   32635 main.go:141] libmachine: (ha-844626-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0612 20:30:48.280514   32635 main.go:141] libmachine: (ha-844626-m03)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/ha-844626-m03.rawdisk'/>
	I0612 20:30:48.280522   32635 main.go:141] libmachine: (ha-844626-m03)       <target dev='hda' bus='virtio'/>
	I0612 20:30:48.280527   32635 main.go:141] libmachine: (ha-844626-m03)     </disk>
	I0612 20:30:48.280534   32635 main.go:141] libmachine: (ha-844626-m03)     <interface type='network'>
	I0612 20:30:48.280540   32635 main.go:141] libmachine: (ha-844626-m03)       <source network='mk-ha-844626'/>
	I0612 20:30:48.280546   32635 main.go:141] libmachine: (ha-844626-m03)       <model type='virtio'/>
	I0612 20:30:48.280552   32635 main.go:141] libmachine: (ha-844626-m03)     </interface>
	I0612 20:30:48.280563   32635 main.go:141] libmachine: (ha-844626-m03)     <interface type='network'>
	I0612 20:30:48.280576   32635 main.go:141] libmachine: (ha-844626-m03)       <source network='default'/>
	I0612 20:30:48.280588   32635 main.go:141] libmachine: (ha-844626-m03)       <model type='virtio'/>
	I0612 20:30:48.280607   32635 main.go:141] libmachine: (ha-844626-m03)     </interface>
	I0612 20:30:48.280626   32635 main.go:141] libmachine: (ha-844626-m03)     <serial type='pty'>
	I0612 20:30:48.280636   32635 main.go:141] libmachine: (ha-844626-m03)       <target port='0'/>
	I0612 20:30:48.280646   32635 main.go:141] libmachine: (ha-844626-m03)     </serial>
	I0612 20:30:48.280657   32635 main.go:141] libmachine: (ha-844626-m03)     <console type='pty'>
	I0612 20:30:48.280668   32635 main.go:141] libmachine: (ha-844626-m03)       <target type='serial' port='0'/>
	I0612 20:30:48.280676   32635 main.go:141] libmachine: (ha-844626-m03)     </console>
	I0612 20:30:48.280686   32635 main.go:141] libmachine: (ha-844626-m03)     <rng model='virtio'>
	I0612 20:30:48.280701   32635 main.go:141] libmachine: (ha-844626-m03)       <backend model='random'>/dev/random</backend>
	I0612 20:30:48.280715   32635 main.go:141] libmachine: (ha-844626-m03)     </rng>
	I0612 20:30:48.280726   32635 main.go:141] libmachine: (ha-844626-m03)     
	I0612 20:30:48.280737   32635 main.go:141] libmachine: (ha-844626-m03)     
	I0612 20:30:48.280745   32635 main.go:141] libmachine: (ha-844626-m03)   </devices>
	I0612 20:30:48.280755   32635 main.go:141] libmachine: (ha-844626-m03) </domain>
	I0612 20:30:48.280765   32635 main.go:141] libmachine: (ha-844626-m03) 
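
The XML document printed line by line above is what the kvm2 driver hands to libvirt: define the domain from the XML, then boot it. A minimal sketch of that sequence with the libvirt Go bindings (the binding choice and the truncated XML placeholder are assumptions; the URI matches the KVMQemuURI value in the config above):

    // define_domain_sketch.go - define and start a libvirt domain from XML (illustrative sketch).
    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        domainXML := "<domain type='kvm'>…</domain>" // placeholder for the full XML shown in the log
        dom, err := conn.DomainDefineXML(domainXML)  // registers the domain with libvirt
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boots the defined domain
            log.Fatal(err)
        }
        log.Println("domain started")
    }
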
	I0612 20:30:48.287742   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:9b:b8:26 in network default
	I0612 20:30:48.288414   32635 main.go:141] libmachine: (ha-844626-m03) Ensuring networks are active...
	I0612 20:30:48.288449   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:48.289226   32635 main.go:141] libmachine: (ha-844626-m03) Ensuring network default is active
	I0612 20:30:48.289688   32635 main.go:141] libmachine: (ha-844626-m03) Ensuring network mk-ha-844626 is active
	I0612 20:30:48.290056   32635 main.go:141] libmachine: (ha-844626-m03) Getting domain xml...
	I0612 20:30:48.290712   32635 main.go:141] libmachine: (ha-844626-m03) Creating domain...
	I0612 20:30:49.530435   32635 main.go:141] libmachine: (ha-844626-m03) Waiting to get IP...
	I0612 20:30:49.531208   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:49.531694   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:49.531731   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:49.531676   33685 retry.go:31] will retry after 288.871984ms: waiting for machine to come up
	I0612 20:30:49.822409   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:49.822897   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:49.822926   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:49.822858   33685 retry.go:31] will retry after 248.487043ms: waiting for machine to come up
	I0612 20:30:50.073378   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:50.074000   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:50.074032   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:50.073942   33685 retry.go:31] will retry after 462.366809ms: waiting for machine to come up
	I0612 20:30:50.537464   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:50.537883   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:50.537920   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:50.537831   33685 retry.go:31] will retry after 483.777516ms: waiting for machine to come up
	I0612 20:30:51.023503   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:51.023968   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:51.023998   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:51.023922   33685 retry.go:31] will retry after 745.471957ms: waiting for machine to come up
	I0612 20:30:51.770915   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:51.771388   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:51.771418   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:51.771330   33685 retry.go:31] will retry after 847.558263ms: waiting for machine to come up
	I0612 20:30:52.620418   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:52.620789   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:52.620818   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:52.620736   33685 retry.go:31] will retry after 856.076838ms: waiting for machine to come up
	I0612 20:30:53.478317   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:53.478753   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:53.478782   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:53.478715   33685 retry.go:31] will retry after 1.102009532s: waiting for machine to come up
	I0612 20:30:54.582139   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:54.582598   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:54.582631   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:54.582547   33685 retry.go:31] will retry after 1.62493678s: waiting for machine to come up
	I0612 20:30:56.209482   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:56.209972   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:56.210002   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:56.209923   33685 retry.go:31] will retry after 2.048125966s: waiting for machine to come up
	I0612 20:30:58.259821   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:58.260459   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:58.260495   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:58.260396   33685 retry.go:31] will retry after 2.165398236s: waiting for machine to come up
	I0612 20:31:00.428290   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:00.428804   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:31:00.428829   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:31:00.428752   33685 retry.go:31] will retry after 3.00838211s: waiting for machine to come up
	I0612 20:31:03.439244   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:03.439728   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:31:03.439749   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:31:03.439679   33685 retry.go:31] will retry after 4.481196758s: waiting for machine to come up
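
The "Waiting to get IP" phase above is a retry loop: query the DHCP leases of the mk-ha-844626 network, and if no lease exists yet, sleep for a growing, jittered interval and try again. The shape of that loop, sketched with a hypothetical lookupIP helper standing in for the lease query:

    // wait_for_ip_sketch.go - retry with growing, jittered backoff until an IP appears (illustrative sketch).
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for reading the libvirt network's DHCP leases.
    func lookupIP() (string, error) { return "", errors.New("no lease yet") }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(backoff)))
            time.Sleep(backoff + jitter) // delays grow roughly like the 288ms…4.5s steps in the log
            backoff *= 2
        }
        return "", fmt.Errorf("no IP within %s", timeout)
    }

    func main() { fmt.Println(waitForIP(3 * time.Minute)) }
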
	I0612 20:31:07.925066   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:07.925573   32635 main.go:141] libmachine: (ha-844626-m03) Found IP for machine: 192.168.39.76
	I0612 20:31:07.925610   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has current primary IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:07.925619   32635 main.go:141] libmachine: (ha-844626-m03) Reserving static IP address...
	I0612 20:31:07.926018   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find host DHCP lease matching {name: "ha-844626-m03", mac: "52:54:00:81:de:69", ip: "192.168.39.76"} in network mk-ha-844626
	I0612 20:31:08.000537   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Getting to WaitForSSH function...
	I0612 20:31:08.000565   32635 main.go:141] libmachine: (ha-844626-m03) Reserved static IP address: 192.168.39.76
	I0612 20:31:08.000577   32635 main.go:141] libmachine: (ha-844626-m03) Waiting for SSH to be available...
	I0612 20:31:08.003095   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.003569   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:minikube Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.003602   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.003791   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Using SSH client type: external
	I0612 20:31:08.003815   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa (-rw-------)
	I0612 20:31:08.003843   32635 main.go:141] libmachine: (ha-844626-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 20:31:08.003863   32635 main.go:141] libmachine: (ha-844626-m03) DBG | About to run SSH command:
	I0612 20:31:08.003876   32635 main.go:141] libmachine: (ha-844626-m03) DBG | exit 0
	I0612 20:31:08.127361   32635 main.go:141] libmachine: (ha-844626-m03) DBG | SSH cmd err, output: <nil>: 
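
WaitForSSH above shells out to the system ssh binary with the options shown and keeps running `exit 0` until the command succeeds, which is taken as the machine being reachable. A rough equivalent with os/exec; the host, user, and key path below are placeholders taken from the log:

    // wait_for_ssh_sketch.go - probe SSH reachability by running `exit 0` remotely (illustrative sketch).
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func sshReady(host, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "docker@"+host,
            "exit 0")
        return cmd.Run() == nil // success means the guest's sshd is up
    }

    func main() {
        for !sshReady("192.168.39.76", "/path/to/id_rsa") { // placeholder key path
            time.Sleep(2 * time.Second)
        }
        fmt.Println("SSH is available")
    }
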
	I0612 20:31:08.127629   32635 main.go:141] libmachine: (ha-844626-m03) KVM machine creation complete!
	I0612 20:31:08.127956   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetConfigRaw
	I0612 20:31:08.128477   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:31:08.128632   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:31:08.128760   32635 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0612 20:31:08.128771   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetState
	I0612 20:31:08.129987   32635 main.go:141] libmachine: Detecting operating system of created instance...
	I0612 20:31:08.130000   32635 main.go:141] libmachine: Waiting for SSH to be available...
	I0612 20:31:08.130006   32635 main.go:141] libmachine: Getting to WaitForSSH function...
	I0612 20:31:08.130016   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:08.132310   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.132657   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.132689   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.132766   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:08.132971   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.133168   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.133307   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:08.133497   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:31:08.133692   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0612 20:31:08.133706   32635 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0612 20:31:08.234624   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 20:31:08.234650   32635 main.go:141] libmachine: Detecting the provisioner...
	I0612 20:31:08.234662   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:08.238508   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.238950   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.238980   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.239113   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:08.239307   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.239435   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.239596   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:08.239718   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:31:08.239899   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0612 20:31:08.239913   32635 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0612 20:31:08.344252   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0612 20:31:08.344327   32635 main.go:141] libmachine: found compatible host: buildroot
	I0612 20:31:08.344336   32635 main.go:141] libmachine: Provisioning with buildroot...
	I0612 20:31:08.344353   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetMachineName
	I0612 20:31:08.344580   32635 buildroot.go:166] provisioning hostname "ha-844626-m03"
	I0612 20:31:08.344594   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetMachineName
	I0612 20:31:08.344758   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:08.347365   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.347673   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.347700   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.347855   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:08.348041   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.348198   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.348322   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:08.348469   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:31:08.348621   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0612 20:31:08.348632   32635 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844626-m03 && echo "ha-844626-m03" | sudo tee /etc/hostname
	I0612 20:31:08.465878   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844626-m03
	
	I0612 20:31:08.465909   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:08.468578   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.468989   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.469019   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.469206   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:08.469432   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.469619   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.469762   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:08.469917   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:31:08.470071   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0612 20:31:08.470086   32635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844626-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844626-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844626-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 20:31:08.580790   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 20:31:08.580817   32635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 20:31:08.580833   32635 buildroot.go:174] setting up certificates
	I0612 20:31:08.580842   32635 provision.go:84] configureAuth start
	I0612 20:31:08.580850   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetMachineName
	I0612 20:31:08.581161   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:31:08.584514   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.584914   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.584939   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.585132   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:08.587586   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.587900   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.587928   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.588070   32635 provision.go:143] copyHostCerts
	I0612 20:31:08.588113   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:31:08.588155   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 20:31:08.588168   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:31:08.588241   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 20:31:08.588319   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:31:08.588339   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 20:31:08.588346   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:31:08.588371   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 20:31:08.588429   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:31:08.588446   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 20:31:08.588452   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:31:08.588472   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 20:31:08.588516   32635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.ha-844626-m03 san=[127.0.0.1 192.168.39.76 ha-844626-m03 localhost minikube]
	I0612 20:31:08.985254   32635 provision.go:177] copyRemoteCerts
	I0612 20:31:08.985309   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 20:31:08.985330   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:08.987927   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.988302   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.988325   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.988518   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:08.988720   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.988898   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:08.989051   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:31:09.071188   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0612 20:31:09.071278   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 20:31:09.096872   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0612 20:31:09.096928   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0612 20:31:09.121719   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0612 20:31:09.121792   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 20:31:09.147728   32635 provision.go:87] duration metric: took 566.87254ms to configureAuth
	I0612 20:31:09.147762   32635 buildroot.go:189] setting minikube options for container-runtime
	I0612 20:31:09.147995   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:31:09.148098   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:09.150549   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.150883   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.150913   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.151009   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:09.151220   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:09.151383   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:09.151514   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:09.151669   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:31:09.151819   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0612 20:31:09.151833   32635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 20:31:09.429751   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 20:31:09.429783   32635 main.go:141] libmachine: Checking connection to Docker...
	I0612 20:31:09.429796   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetURL
	I0612 20:31:09.431160   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Using libvirt version 6000000
	I0612 20:31:09.433450   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.433884   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.433915   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.434123   32635 main.go:141] libmachine: Docker is up and running!
	I0612 20:31:09.434135   32635 main.go:141] libmachine: Reticulating splines...
	I0612 20:31:09.434141   32635 client.go:171] duration metric: took 21.472896203s to LocalClient.Create
	I0612 20:31:09.434161   32635 start.go:167] duration metric: took 21.472955338s to libmachine.API.Create "ha-844626"
	I0612 20:31:09.434171   32635 start.go:293] postStartSetup for "ha-844626-m03" (driver="kvm2")
	I0612 20:31:09.434180   32635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 20:31:09.434195   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:31:09.434433   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 20:31:09.434483   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:09.436351   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.436710   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.436740   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.436809   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:09.436953   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:09.437111   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:09.437271   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:31:09.518260   32635 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 20:31:09.522742   32635 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 20:31:09.522764   32635 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 20:31:09.522825   32635 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 20:31:09.522891   32635 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 20:31:09.522900   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /etc/ssl/certs/214442.pem
	I0612 20:31:09.522972   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 20:31:09.532751   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:31:09.561336   32635 start.go:296] duration metric: took 127.151212ms for postStartSetup
	I0612 20:31:09.561393   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetConfigRaw
	I0612 20:31:09.561980   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:31:09.564747   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.565107   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.565143   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.565359   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:31:09.565541   32635 start.go:128] duration metric: took 21.624480809s to createHost
	I0612 20:31:09.565563   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:09.567821   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.568161   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.568189   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.568426   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:09.568623   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:09.568808   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:09.568997   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:09.569213   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:31:09.569360   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0612 20:31:09.569370   32635 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 20:31:09.672998   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718224269.643780635
	
	I0612 20:31:09.673041   32635 fix.go:216] guest clock: 1718224269.643780635
	I0612 20:31:09.673051   32635 fix.go:229] Guest: 2024-06-12 20:31:09.643780635 +0000 UTC Remote: 2024-06-12 20:31:09.565552821 +0000 UTC m=+208.626001239 (delta=78.227814ms)
	I0612 20:31:09.673074   32635 fix.go:200] guest clock delta is within tolerance: 78.227814ms
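	Here fix.go compares the guest clock against the host and skips a resync because the delta is within tolerance. A minimal sketch of that comparison follows; the 2-second threshold is an assumed illustration value, not necessarily minikube's actual tolerance:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaWithinTolerance reports whether the guest clock is close
	// enough to the host clock that no time resync is needed.
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(78 * time.Millisecond) // delta comparable to the log above
		delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
	}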
	I0612 20:31:09.673085   32635 start.go:83] releasing machines lock for "ha-844626-m03", held for 21.732129511s
	I0612 20:31:09.673109   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:31:09.673368   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:31:09.675736   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.676137   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.676163   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.678666   32635 out.go:177] * Found network options:
	I0612 20:31:09.680298   32635 out.go:177]   - NO_PROXY=192.168.39.196,192.168.39.108
	W0612 20:31:09.681788   32635 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 20:31:09.681811   32635 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 20:31:09.681823   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:31:09.682457   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:31:09.682640   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:31:09.682709   32635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 20:31:09.682751   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	W0612 20:31:09.683033   32635 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 20:31:09.683056   32635 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 20:31:09.683135   32635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 20:31:09.683155   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:09.685451   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.685788   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.685813   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.685887   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.685998   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:09.686219   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:09.686385   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:09.686449   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.686476   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.686567   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:09.686572   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:31:09.686689   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:09.686853   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:09.686993   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:31:09.920572   32635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 20:31:09.927596   32635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 20:31:09.927673   32635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 20:31:09.944808   32635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 20:31:09.944832   32635 start.go:494] detecting cgroup driver to use...
	I0612 20:31:09.944897   32635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 20:31:09.962865   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 20:31:09.979533   32635 docker.go:217] disabling cri-docker service (if available) ...
	I0612 20:31:09.979586   32635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 20:31:09.994509   32635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 20:31:10.010483   32635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 20:31:10.133393   32635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 20:31:10.309888   32635 docker.go:233] disabling docker service ...
	I0612 20:31:10.309964   32635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 20:31:10.327760   32635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 20:31:10.342124   32635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 20:31:10.472337   32635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 20:31:10.599790   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 20:31:10.615120   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 20:31:10.635337   32635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 20:31:10.635413   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:31:10.646919   32635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 20:31:10.646994   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:31:10.658588   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:31:10.670406   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:31:10.681737   32635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 20:31:10.694481   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:31:10.706838   32635 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:31:10.725071   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
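	The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl). A rough Go equivalent of the same line-rewriting idea on an in-memory copy of the file is sketched below; the key names are taken from the log, everything else is illustrative:

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf mirrors two of the sed edits above: point pause_image at
	// the desired pause container and force the requested cgroup manager.
	func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
		return conf
	}

	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
	}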
	I0612 20:31:10.736339   32635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 20:31:10.746185   32635 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 20:31:10.746232   32635 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 20:31:10.759865   32635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 20:31:10.769901   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:31:10.891233   32635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 20:31:11.056415   32635 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 20:31:11.056500   32635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 20:31:11.061865   32635 start.go:562] Will wait 60s for crictl version
	I0612 20:31:11.061925   32635 ssh_runner.go:195] Run: which crictl
	I0612 20:31:11.065846   32635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 20:31:11.109896   32635 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 20:31:11.109972   32635 ssh_runner.go:195] Run: crio --version
	I0612 20:31:11.139063   32635 ssh_runner.go:195] Run: crio --version
	I0612 20:31:11.170476   32635 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 20:31:11.171902   32635 out.go:177]   - env NO_PROXY=192.168.39.196
	I0612 20:31:11.173186   32635 out.go:177]   - env NO_PROXY=192.168.39.196,192.168.39.108
	I0612 20:31:11.174409   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:31:11.177335   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:11.177685   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:11.177714   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:11.177934   32635 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 20:31:11.182119   32635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 20:31:11.195361   32635 mustload.go:65] Loading cluster: ha-844626
	I0612 20:31:11.195625   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:31:11.195944   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:31:11.195985   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:31:11.211009   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42727
	I0612 20:31:11.211462   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:31:11.211950   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:31:11.211983   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:31:11.212314   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:31:11.212509   32635 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:31:11.213918   32635 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:31:11.214189   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:31:11.214221   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:31:11.229954   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41555
	I0612 20:31:11.230381   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:31:11.230898   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:31:11.230923   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:31:11.231263   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:31:11.231484   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:31:11.231654   32635 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626 for IP: 192.168.39.76
	I0612 20:31:11.231667   32635 certs.go:194] generating shared ca certs ...
	I0612 20:31:11.231689   32635 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:31:11.231860   32635 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 20:31:11.231917   32635 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 20:31:11.231931   32635 certs.go:256] generating profile certs ...
	I0612 20:31:11.232022   32635 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key
	I0612 20:31:11.232051   32635 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.a557c0af
	I0612 20:31:11.232079   32635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.a557c0af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.108 192.168.39.76 192.168.39.254]
	I0612 20:31:11.614498   32635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.a557c0af ...
	I0612 20:31:11.614528   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.a557c0af: {Name:mkb1a6c2268debdda293d42197a6a0500f29d2e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:31:11.614689   32635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.a557c0af ...
	I0612 20:31:11.614700   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.a557c0af: {Name:mka8804460e33713c2d81479b819d02daff8d551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:31:11.614764   32635 certs.go:381] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.a557c0af -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt
	I0612 20:31:11.614888   32635 certs.go:385] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.a557c0af -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key
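	At this point certs.go has signed the per-profile apiserver serving certificate with the node IPs, service IP, and HA VIP as SANs (the san=[...] list logged above). A condensed sketch of producing such a SAN certificate with crypto/x509 is shown below; it is self-signed purely for brevity, whereas minikube signs with its profile CA:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// SANs mirroring the log: service IP, loopback, node IP and the HA VIP.
		ips := []string{"10.96.0.1", "127.0.0.1", "192.168.39.76", "192.168.39.254"}

		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.ha-844626-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-844626-m03", "localhost", "minikube"},
		}
		for _, ip := range ips {
			tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(ip))
		}

		// Self-signed here (template also used as parent) to keep the sketch short.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}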
	I0612 20:31:11.614999   32635 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key
	I0612 20:31:11.615014   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 20:31:11.615027   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0612 20:31:11.615042   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 20:31:11.615052   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 20:31:11.615061   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 20:31:11.615069   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 20:31:11.615080   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 20:31:11.615088   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 20:31:11.615130   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 20:31:11.615157   32635 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 20:31:11.615164   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 20:31:11.615208   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 20:31:11.615239   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 20:31:11.615261   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 20:31:11.615328   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:31:11.615367   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:31:11.615382   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem -> /usr/share/ca-certificates/21444.pem
	I0612 20:31:11.615396   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /usr/share/ca-certificates/214442.pem
	I0612 20:31:11.615427   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:31:11.618513   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:31:11.618908   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:31:11.618934   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:31:11.619093   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:31:11.619309   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:31:11.619466   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:31:11.619607   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:31:11.695564   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0612 20:31:11.701753   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0612 20:31:11.714706   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0612 20:31:11.719619   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0612 20:31:11.731960   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0612 20:31:11.736481   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0612 20:31:11.746492   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0612 20:31:11.751454   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0612 20:31:11.763416   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0612 20:31:11.768272   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0612 20:31:11.779490   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0612 20:31:11.783822   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0612 20:31:11.795612   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 20:31:11.822551   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 20:31:11.848718   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 20:31:11.873366   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 20:31:11.897994   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0612 20:31:11.923909   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 20:31:11.950870   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 20:31:11.977087   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 20:31:12.002515   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 20:31:12.029115   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 20:31:12.056793   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 20:31:12.083280   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0612 20:31:12.101473   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0612 20:31:12.120030   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0612 20:31:12.138498   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0612 20:31:12.156187   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0612 20:31:12.176290   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0612 20:31:12.194850   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0612 20:31:12.212575   32635 ssh_runner.go:195] Run: openssl version
	I0612 20:31:12.218789   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 20:31:12.229576   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 20:31:12.234167   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 20:31:12.234218   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 20:31:12.241617   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 20:31:12.253094   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 20:31:12.264132   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 20:31:12.268860   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 20:31:12.268928   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 20:31:12.275059   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 20:31:12.286432   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 20:31:12.298994   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:31:12.303731   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:31:12.303775   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:31:12.310345   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 20:31:12.324673   32635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 20:31:12.329313   32635 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 20:31:12.329386   32635 kubeadm.go:928] updating node {m03 192.168.39.76 8443 v1.30.1 crio true true} ...
	I0612 20:31:12.329470   32635 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844626-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 20:31:12.329507   32635 kube-vip.go:115] generating kube-vip config ...
	I0612 20:31:12.329551   32635 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0612 20:31:12.350550   32635 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0612 20:31:12.350611   32635 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
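	kube-vip.go renders the static pod manifest above from a template, substituting the VIP address, port, interface, and image. A stripped-down sketch of that templating step follows; the template body is abbreviated and illustrative, with parameter values copied from the log:

	package main

	import (
		"os"
		"text/template"
	)

	// A heavily abbreviated stand-in for the kube-vip static pod template;
	// only the parameterised fields that appear in the log are shown.
	const kubeVipTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: {{.Image}}
	    env:
	    - name: port
	      value: "{{.Port}}"
	    - name: vip_interface
	      value: {{.Interface}}
	    - name: address
	      value: {{.VIP}}
	  hostNetwork: true
	`

	func main() {
		params := struct {
			Image, Interface, VIP string
			Port                  int
		}{
			Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
			Interface: "eth0",
			VIP:       "192.168.39.254",
			Port:      8443,
		}
		t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
		if err := t.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}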
	I0612 20:31:12.350666   32635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 20:31:12.364956   32635 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0612 20:31:12.365009   32635 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0612 20:31:12.378412   32635 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0612 20:31:12.378441   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 20:31:12.378444   32635 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0612 20:31:12.378444   32635 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0612 20:31:12.378468   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 20:31:12.378503   32635 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 20:31:12.378506   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:31:12.378530   32635 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 20:31:12.397449   32635 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0612 20:31:12.397461   32635 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0612 20:31:12.397496   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0612 20:31:12.397503   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 20:31:12.397518   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0612 20:31:12.397578   32635 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 20:31:12.419400   32635 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0612 20:31:12.419445   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0612 20:31:13.323865   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0612 20:31:13.336087   32635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0612 20:31:13.354818   32635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 20:31:13.372711   32635 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0612 20:31:13.390571   32635 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0612 20:31:13.394598   32635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 20:31:13.407397   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:31:13.523697   32635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 20:31:13.541507   32635 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:31:13.541969   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:31:13.542025   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:31:13.558836   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0612 20:31:13.559295   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:31:13.559976   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:31:13.560014   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:31:13.560375   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:31:13.560593   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:31:13.560770   32635 start.go:316] joinCluster: &{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:31:13.560889   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0612 20:31:13.560909   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:31:13.564005   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:31:13.564508   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:31:13.564538   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:31:13.564700   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:31:13.564887   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:31:13.565045   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:31:13.565169   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:31:13.721863   32635 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:31:13.721910   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nxb23r.suyi54h7mrjhpsua --discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844626-m03 --control-plane --apiserver-advertise-address=192.168.39.76 --apiserver-bind-port=8443"
	I0612 20:31:37.421067   32635 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nxb23r.suyi54h7mrjhpsua --discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844626-m03 --control-plane --apiserver-advertise-address=192.168.39.76 --apiserver-bind-port=8443": (23.699126175s)
	I0612 20:31:37.421105   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0612 20:31:37.987126   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844626-m03 minikube.k8s.io/updated_at=2024_06_12T20_31_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=ha-844626 minikube.k8s.io/primary=false
	I0612 20:31:38.124611   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-844626-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0612 20:31:38.230403   32635 start.go:318] duration metric: took 24.669630386s to joinCluster
	I0612 20:31:38.230494   32635 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:31:38.231832   32635 out.go:177] * Verifying Kubernetes components...
	I0612 20:31:38.230765   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:31:38.233162   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:31:38.490906   32635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 20:31:38.526483   32635 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:31:38.526721   32635 kapi.go:59] client config for ha-844626: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.crt", KeyFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key", CAFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfb000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0612 20:31:38.526802   32635 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.196:8443
	I0612 20:31:38.527031   32635 node_ready.go:35] waiting up to 6m0s for node "ha-844626-m03" to be "Ready" ...
	I0612 20:31:38.527106   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:38.527116   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:38.527128   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:38.527145   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:38.530680   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:39.027416   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:39.027443   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:39.027454   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:39.027459   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:39.031781   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:31:39.528068   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:39.528094   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:39.528107   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:39.528111   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:39.531692   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:40.028125   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:40.028154   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:40.028161   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:40.028165   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:40.066785   32635 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0612 20:31:40.527322   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:40.527343   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:40.527351   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:40.527356   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:40.531343   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:40.531992   32635 node_ready.go:53] node "ha-844626-m03" has status "Ready":"False"
	I0612 20:31:41.027772   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:41.027801   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:41.027810   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:41.027815   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:41.031058   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:41.527326   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:41.527345   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:41.527353   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:41.527358   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:41.531197   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:42.028267   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:42.028294   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:42.028306   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:42.028311   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:42.032108   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:42.528068   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:42.528111   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:42.528126   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:42.528132   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:42.532649   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:31:42.534388   32635 node_ready.go:53] node "ha-844626-m03" has status "Ready":"False"
	I0612 20:31:43.028177   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:43.028202   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:43.028212   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:43.028220   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:43.031990   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:43.527286   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:43.527308   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:43.527316   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:43.527320   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:43.531214   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:44.027723   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:44.027811   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:44.027836   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:44.027851   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:44.031620   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:44.528016   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:44.528049   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:44.528061   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:44.528069   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:44.532435   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:31:45.027915   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:45.027938   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:45.027946   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:45.027950   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:45.032028   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:31:45.032714   32635 node_ready.go:53] node "ha-844626-m03" has status "Ready":"False"
	I0612 20:31:45.528113   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:45.528134   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:45.528142   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:45.528145   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:45.531604   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:46.027769   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:46.027795   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.027806   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.027812   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.031128   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:46.527669   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:46.527697   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.527709   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.527715   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.531779   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:31:46.532389   32635 node_ready.go:49] node "ha-844626-m03" has status "Ready":"True"
	I0612 20:31:46.532414   32635 node_ready.go:38] duration metric: took 8.005364342s for node "ha-844626-m03" to be "Ready" ...
	I0612 20:31:46.532428   32635 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 20:31:46.532495   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:31:46.532509   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.532519   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.532525   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.540139   32635 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 20:31:46.547810   32635 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bqzvn" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.547885   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bqzvn
	I0612 20:31:46.547893   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.547900   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.547905   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.550761   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.551415   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:46.551429   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.551435   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.551439   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.554217   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.554817   32635 pod_ready.go:92] pod "coredns-7db6d8ff4d-bqzvn" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:46.554840   32635 pod_ready.go:81] duration metric: took 7.00561ms for pod "coredns-7db6d8ff4d-bqzvn" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.554851   32635 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lxd6n" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.554913   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lxd6n
	I0612 20:31:46.554923   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.554933   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.554938   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.557496   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.558334   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:46.558348   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.558355   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.558359   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.560530   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.561126   32635 pod_ready.go:92] pod "coredns-7db6d8ff4d-lxd6n" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:46.561142   32635 pod_ready.go:81] duration metric: took 6.284183ms for pod "coredns-7db6d8ff4d-lxd6n" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.561149   32635 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.561200   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626
	I0612 20:31:46.561208   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.561215   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.561218   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.563744   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.564320   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:46.564332   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.564338   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.564342   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.566807   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.567330   32635 pod_ready.go:92] pod "etcd-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:46.567345   32635 pod_ready.go:81] duration metric: took 6.19023ms for pod "etcd-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.567352   32635 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.567402   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:31:46.567412   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.567423   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.567431   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.569759   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.570287   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:46.570302   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.570311   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.570316   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.572958   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.573397   32635 pod_ready.go:92] pod "etcd-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:46.573411   32635 pod_ready.go:81] duration metric: took 6.053668ms for pod "etcd-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.573419   32635 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.727724   32635 request.go:629] Waited for 154.232817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:46.727789   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:46.727796   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.727806   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.727818   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.731086   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:46.928232   32635 request.go:629] Waited for 196.34772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:46.928290   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:46.928295   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.928304   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.928308   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.933132   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:31:47.128254   32635 request.go:629] Waited for 54.231002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:47.128320   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:47.128327   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:47.128339   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:47.128348   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:47.131582   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:47.328210   32635 request.go:629] Waited for 195.396597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:47.328302   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:47.328313   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:47.328323   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:47.328329   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:47.331707   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:47.574333   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:47.574356   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:47.574363   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:47.574367   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:47.577739   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:47.727817   32635 request.go:629] Waited for 149.225733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:47.727883   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:47.727890   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:47.727900   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:47.727906   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:47.731659   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:48.073962   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:48.073983   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:48.073990   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:48.073994   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:48.077082   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:48.128024   32635 request.go:629] Waited for 50.252672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:48.128192   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:48.128213   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:48.128225   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:48.128235   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:48.131715   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:48.574112   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:48.574133   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:48.574141   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:48.574145   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:48.577985   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:48.578520   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:48.578534   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:48.578541   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:48.578545   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:48.581487   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:48.581879   32635 pod_ready.go:102] pod "etcd-ha-844626-m03" in "kube-system" namespace has status "Ready":"False"
	I0612 20:31:49.074361   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:49.074385   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:49.074393   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:49.074398   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:49.077684   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:49.078478   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:49.078491   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:49.078498   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:49.078502   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:49.081408   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:49.573751   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:49.573774   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:49.573781   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:49.573786   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:49.577153   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:49.577967   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:49.577981   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:49.577988   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:49.577993   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:49.580712   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:50.074042   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:50.074064   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:50.074072   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:50.074076   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:50.077484   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:50.078347   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:50.078373   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:50.078381   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:50.078385   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:50.081252   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:50.574456   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:50.574478   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:50.574487   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:50.574490   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:50.578229   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:50.579085   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:50.579101   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:50.579108   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:50.579112   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:50.581689   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:50.582205   32635 pod_ready.go:102] pod "etcd-ha-844626-m03" in "kube-system" namespace has status "Ready":"False"
	I0612 20:31:51.073632   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:51.073656   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:51.073664   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:51.073668   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:51.077129   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:51.077690   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:51.077707   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:51.077717   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:51.077722   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:51.080226   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:51.574347   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:51.574367   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:51.574375   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:51.574380   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:51.578352   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:51.578938   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:51.578954   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:51.578963   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:51.578967   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:51.582800   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.073848   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:52.073876   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.073887   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.073891   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.077582   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.078469   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:52.078490   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.078501   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.078505   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.081525   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.082718   32635 pod_ready.go:92] pod "etcd-ha-844626-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:52.082740   32635 pod_ready.go:81] duration metric: took 5.509311762s for pod "etcd-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.082763   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.082830   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844626
	I0612 20:31:52.082841   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.082851   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.082862   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.085626   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:52.086373   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:52.086389   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.086396   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.086399   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.089053   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:52.089541   32635 pod_ready.go:92] pod "kube-apiserver-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:52.089556   32635 pod_ready.go:81] duration metric: took 6.782641ms for pod "kube-apiserver-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.089564   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.089611   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844626-m02
	I0612 20:31:52.089618   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.089625   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.089631   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.093324   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.128316   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:52.128342   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.128354   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.128362   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.132258   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.132723   32635 pod_ready.go:92] pod "kube-apiserver-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:52.132740   32635 pod_ready.go:81] duration metric: took 43.169177ms for pod "kube-apiserver-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.132748   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.327995   32635 request.go:629] Waited for 195.172189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844626-m03
	I0612 20:31:52.328054   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844626-m03
	I0612 20:31:52.328060   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.328069   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.328079   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.330878   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:52.527844   32635 request.go:629] Waited for 196.286481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:52.527915   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:52.527924   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.527934   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.527941   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.530973   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.531519   32635 pod_ready.go:92] pod "kube-apiserver-ha-844626-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:52.531545   32635 pod_ready.go:81] duration metric: took 398.790061ms for pod "kube-apiserver-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.531558   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.728581   32635 request.go:629] Waited for 196.949195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626
	I0612 20:31:52.728634   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626
	I0612 20:31:52.728639   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.728646   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.728649   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.731995   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.928138   32635 request.go:629] Waited for 195.346578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:52.928211   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:52.928216   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.928224   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.928229   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.931855   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.932808   32635 pod_ready.go:92] pod "kube-controller-manager-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:52.932827   32635 pod_ready.go:81] duration metric: took 401.260741ms for pod "kube-controller-manager-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.932835   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:53.128301   32635 request.go:629] Waited for 195.41004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:31:53.128390   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:31:53.128398   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:53.128407   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:53.128412   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:53.132328   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:53.328286   32635 request.go:629] Waited for 195.374363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:53.328341   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:53.328348   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:53.328355   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:53.328361   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:53.332028   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:53.332717   32635 pod_ready.go:92] pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:53.332738   32635 pod_ready.go:81] duration metric: took 399.896251ms for pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:53.332747   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:53.527674   32635 request.go:629] Waited for 194.858927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m03
	I0612 20:31:53.527754   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m03
	I0612 20:31:53.527764   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:53.527770   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:53.527776   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:53.531048   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:53.728358   32635 request.go:629] Waited for 196.372417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:53.728412   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:53.728417   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:53.728425   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:53.728430   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:53.732421   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:53.734734   32635 pod_ready.go:92] pod "kube-controller-manager-ha-844626-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:53.734760   32635 pod_ready.go:81] duration metric: took 402.005437ms for pod "kube-controller-manager-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:53.734777   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2clg8" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:53.927774   32635 request.go:629] Waited for 192.919944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2clg8
	I0612 20:31:53.927831   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2clg8
	I0612 20:31:53.927836   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:53.927843   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:53.927849   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:53.931507   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:54.128530   32635 request.go:629] Waited for 196.27683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:54.128599   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:54.128606   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:54.128616   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:54.128622   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:54.134543   32635 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 20:31:54.135601   32635 pod_ready.go:92] pod "kube-proxy-2clg8" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:54.135630   32635 pod_ready.go:81] duration metric: took 400.844763ms for pod "kube-proxy-2clg8" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:54.135644   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-69ctp" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:54.328620   32635 request.go:629] Waited for 192.902619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-69ctp
	I0612 20:31:54.328686   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-69ctp
	I0612 20:31:54.328693   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:54.328701   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:54.328705   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:54.332062   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:54.528054   32635 request.go:629] Waited for 195.328764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:54.528119   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:54.528126   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:54.528133   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:54.528141   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:54.531837   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:54.532356   32635 pod_ready.go:92] pod "kube-proxy-69ctp" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:54.532375   32635 pod_ready.go:81] duration metric: took 396.724238ms for pod "kube-proxy-69ctp" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:54.532384   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f7ct8" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:54.728378   32635 request.go:629] Waited for 195.936765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f7ct8
	I0612 20:31:54.728450   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f7ct8
	I0612 20:31:54.728458   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:54.728465   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:54.728472   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:54.731856   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:54.927763   32635 request.go:629] Waited for 195.286396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:54.927865   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:54.927880   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:54.927889   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:54.927899   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:54.931389   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:54.931956   32635 pod_ready.go:92] pod "kube-proxy-f7ct8" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:54.931976   32635 pod_ready.go:81] duration metric: took 399.586497ms for pod "kube-proxy-f7ct8" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:54.931985   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:55.128039   32635 request.go:629] Waited for 195.996524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626
	I0612 20:31:55.128099   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626
	I0612 20:31:55.128105   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:55.128122   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:55.128129   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:55.131689   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:55.328341   32635 request.go:629] Waited for 195.800766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:55.328431   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:55.328443   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:55.328453   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:55.328460   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:55.332328   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:55.333051   32635 pod_ready.go:92] pod "kube-scheduler-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:55.333070   32635 pod_ready.go:81] duration metric: took 401.077538ms for pod "kube-scheduler-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:55.333082   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:55.528131   32635 request.go:629] Waited for 194.985749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626-m02
	I0612 20:31:55.528203   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626-m02
	I0612 20:31:55.528208   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:55.528215   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:55.528219   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:55.532123   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:55.728034   32635 request.go:629] Waited for 195.369687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:55.728095   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:55.728102   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:55.728115   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:55.728126   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:55.731299   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:55.731949   32635 pod_ready.go:92] pod "kube-scheduler-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:55.731969   32635 pod_ready.go:81] duration metric: took 398.877951ms for pod "kube-scheduler-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:55.731978   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:55.928038   32635 request.go:629] Waited for 195.972809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626-m03
	I0612 20:31:55.928129   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626-m03
	I0612 20:31:55.928141   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:55.928153   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:55.928164   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:55.931701   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:56.128003   32635 request.go:629] Waited for 195.339584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:56.128092   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:56.128106   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:56.128116   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:56.128125   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:56.131663   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:56.132326   32635 pod_ready.go:92] pod "kube-scheduler-ha-844626-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:56.132350   32635 pod_ready.go:81] duration metric: took 400.363545ms for pod "kube-scheduler-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:56.132365   32635 pod_ready.go:38] duration metric: took 9.599925264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 20:31:56.132387   32635 api_server.go:52] waiting for apiserver process to appear ...
	I0612 20:31:56.132450   32635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:31:56.148716   32635 api_server.go:72] duration metric: took 17.918187765s to wait for apiserver process to appear ...
	I0612 20:31:56.148747   32635 api_server.go:88] waiting for apiserver healthz status ...
	I0612 20:31:56.148767   32635 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0612 20:31:56.155111   32635 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0612 20:31:56.155198   32635 round_trippers.go:463] GET https://192.168.39.196:8443/version
	I0612 20:31:56.155208   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:56.155216   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:56.155219   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:56.155969   32635 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 20:31:56.156023   32635 api_server.go:141] control plane version: v1.30.1
	I0612 20:31:56.156036   32635 api_server.go:131] duration metric: took 7.282834ms to wait for apiserver health ...
	I0612 20:31:56.156044   32635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 20:31:56.328332   32635 request.go:629] Waited for 172.226629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:31:56.328397   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:31:56.328402   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:56.328411   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:56.328422   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:56.334778   32635 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 20:31:56.341135   32635 system_pods.go:59] 24 kube-system pods found
	I0612 20:31:56.341160   32635 system_pods.go:61] "coredns-7db6d8ff4d-bqzvn" [b22b3ba0-1a59-4066-9db5-380986d73dca] Running
	I0612 20:31:56.341164   32635 system_pods.go:61] "coredns-7db6d8ff4d-lxd6n" [65d25d78-6fa7-4dc7-9cf2-e2fac796f194] Running
	I0612 20:31:56.341168   32635 system_pods.go:61] "etcd-ha-844626" [73812d48-addc-4957-ae24-6bbad2f5fbaa] Running
	I0612 20:31:56.341171   32635 system_pods.go:61] "etcd-ha-844626-m02" [57d89f35-94d4-4b64-a648-c440eaddef2a] Running
	I0612 20:31:56.341174   32635 system_pods.go:61] "etcd-ha-844626-m03" [663349bf-770f-4ea2-acf1-9fef6dd30299] Running
	I0612 20:31:56.341177   32635 system_pods.go:61] "kindnet-8hdxz" [26fbb25f-70b2-41bc-809a-0f8ba75a8432] Running
	I0612 20:31:56.341180   32635 system_pods.go:61] "kindnet-fz6bl" [fb946e9f-19cd-4a9f-8585-88118c840922] Running
	I0612 20:31:56.341183   32635 system_pods.go:61] "kindnet-mthnq" [49950bb0-368d-4239-ae93-04c980a8b531] Running
	I0612 20:31:56.341186   32635 system_pods.go:61] "kube-apiserver-ha-844626" [0e8ba551-e997-453a-b76f-a090a441bce4] Running
	I0612 20:31:56.341189   32635 system_pods.go:61] "kube-apiserver-ha-844626-m02" [eeaf9c1b-e433-4de6-b6e8-4c33cd467a42] Running
	I0612 20:31:56.341192   32635 system_pods.go:61] "kube-apiserver-ha-844626-m03" [5f530a0a-cc60-4724-b3fa-4525884da5e8] Running
	I0612 20:31:56.341195   32635 system_pods.go:61] "kube-controller-manager-ha-844626" [9bca7a0a-74d1-4b9c-9915-2cf6a4eb5e52] Running
	I0612 20:31:56.341198   32635 system_pods.go:61] "kube-controller-manager-ha-844626-m02" [6e26986e-06e4-4e85-b83d-57c2254732f0] Running
	I0612 20:31:56.341201   32635 system_pods.go:61] "kube-controller-manager-ha-844626-m03" [0df52c5e-a186-4b14-a5d4-bb6d5190bac0] Running
	I0612 20:31:56.341204   32635 system_pods.go:61] "kube-proxy-2clg8" [9e4dd97c-794a-4f29-bc12-f7892e5fcfd4] Running
	I0612 20:31:56.341208   32635 system_pods.go:61] "kube-proxy-69ctp" [c66149e8-2a69-4f1f-9ddc-5e272204e6f5] Running
	I0612 20:31:56.341210   32635 system_pods.go:61] "kube-proxy-f7ct8" [4bf3e7e1-68e8-4d0d-980b-cb5055e10365] Running
	I0612 20:31:56.341213   32635 system_pods.go:61] "kube-scheduler-ha-844626" [49238394-1429-40ce-8d74-290b1743547f] Running
	I0612 20:31:56.341216   32635 system_pods.go:61] "kube-scheduler-ha-844626-m02" [488c0960-8abb-40d1-a92e-bd4f61b5973b] Running
	I0612 20:31:56.341219   32635 system_pods.go:61] "kube-scheduler-ha-844626-m03" [2ec2f277-0a72-4937-8591-28ca2822e98d] Running
	I0612 20:31:56.341222   32635 system_pods.go:61] "kube-vip-ha-844626" [654fd183-21b0-4df5-b557-ed676c5ecb71] Running
	I0612 20:31:56.341227   32635 system_pods.go:61] "kube-vip-ha-844626-m02" [c7785d9d-bfc0-4f65-b853-36a7f2ba791b] Running
	I0612 20:31:56.341234   32635 system_pods.go:61] "kube-vip-ha-844626-m03" [4207cddd-6eb3-40c6-be2c-ac895964aa0d] Running
	I0612 20:31:56.341239   32635 system_pods.go:61] "storage-provisioner" [d94c16d7-da82-41e3-82fe-83ed6e581f69] Running
	I0612 20:31:56.341247   32635 system_pods.go:74] duration metric: took 185.195643ms to wait for pod list to return data ...
	I0612 20:31:56.341260   32635 default_sa.go:34] waiting for default service account to be created ...
	I0612 20:31:56.528660   32635 request.go:629] Waited for 187.33133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0612 20:31:56.528711   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0612 20:31:56.528717   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:56.528725   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:56.528732   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:56.532203   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:56.532352   32635 default_sa.go:45] found service account: "default"
	I0612 20:31:56.532374   32635 default_sa.go:55] duration metric: took 191.105869ms for default service account to be created ...
	I0612 20:31:56.532384   32635 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 20:31:56.727726   32635 request.go:629] Waited for 195.277738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:31:56.727804   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:31:56.727816   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:56.727826   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:56.727836   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:56.737456   32635 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 20:31:56.744345   32635 system_pods.go:86] 24 kube-system pods found
	I0612 20:31:56.744371   32635 system_pods.go:89] "coredns-7db6d8ff4d-bqzvn" [b22b3ba0-1a59-4066-9db5-380986d73dca] Running
	I0612 20:31:56.744378   32635 system_pods.go:89] "coredns-7db6d8ff4d-lxd6n" [65d25d78-6fa7-4dc7-9cf2-e2fac796f194] Running
	I0612 20:31:56.744382   32635 system_pods.go:89] "etcd-ha-844626" [73812d48-addc-4957-ae24-6bbad2f5fbaa] Running
	I0612 20:31:56.744388   32635 system_pods.go:89] "etcd-ha-844626-m02" [57d89f35-94d4-4b64-a648-c440eaddef2a] Running
	I0612 20:31:56.744395   32635 system_pods.go:89] "etcd-ha-844626-m03" [663349bf-770f-4ea2-acf1-9fef6dd30299] Running
	I0612 20:31:56.744401   32635 system_pods.go:89] "kindnet-8hdxz" [26fbb25f-70b2-41bc-809a-0f8ba75a8432] Running
	I0612 20:31:56.744411   32635 system_pods.go:89] "kindnet-fz6bl" [fb946e9f-19cd-4a9f-8585-88118c840922] Running
	I0612 20:31:56.744421   32635 system_pods.go:89] "kindnet-mthnq" [49950bb0-368d-4239-ae93-04c980a8b531] Running
	I0612 20:31:56.744427   32635 system_pods.go:89] "kube-apiserver-ha-844626" [0e8ba551-e997-453a-b76f-a090a441bce4] Running
	I0612 20:31:56.744436   32635 system_pods.go:89] "kube-apiserver-ha-844626-m02" [eeaf9c1b-e433-4de6-b6e8-4c33cd467a42] Running
	I0612 20:31:56.744445   32635 system_pods.go:89] "kube-apiserver-ha-844626-m03" [5f530a0a-cc60-4724-b3fa-4525884da5e8] Running
	I0612 20:31:56.744450   32635 system_pods.go:89] "kube-controller-manager-ha-844626" [9bca7a0a-74d1-4b9c-9915-2cf6a4eb5e52] Running
	I0612 20:31:56.744456   32635 system_pods.go:89] "kube-controller-manager-ha-844626-m02" [6e26986e-06e4-4e85-b83d-57c2254732f0] Running
	I0612 20:31:56.744461   32635 system_pods.go:89] "kube-controller-manager-ha-844626-m03" [0df52c5e-a186-4b14-a5d4-bb6d5190bac0] Running
	I0612 20:31:56.744468   32635 system_pods.go:89] "kube-proxy-2clg8" [9e4dd97c-794a-4f29-bc12-f7892e5fcfd4] Running
	I0612 20:31:56.744472   32635 system_pods.go:89] "kube-proxy-69ctp" [c66149e8-2a69-4f1f-9ddc-5e272204e6f5] Running
	I0612 20:31:56.744478   32635 system_pods.go:89] "kube-proxy-f7ct8" [4bf3e7e1-68e8-4d0d-980b-cb5055e10365] Running
	I0612 20:31:56.744482   32635 system_pods.go:89] "kube-scheduler-ha-844626" [49238394-1429-40ce-8d74-290b1743547f] Running
	I0612 20:31:56.744489   32635 system_pods.go:89] "kube-scheduler-ha-844626-m02" [488c0960-8abb-40d1-a92e-bd4f61b5973b] Running
	I0612 20:31:56.744493   32635 system_pods.go:89] "kube-scheduler-ha-844626-m03" [2ec2f277-0a72-4937-8591-28ca2822e98d] Running
	I0612 20:31:56.744499   32635 system_pods.go:89] "kube-vip-ha-844626" [654fd183-21b0-4df5-b557-ed676c5ecb71] Running
	I0612 20:31:56.744504   32635 system_pods.go:89] "kube-vip-ha-844626-m02" [c7785d9d-bfc0-4f65-b853-36a7f2ba791b] Running
	I0612 20:31:56.744510   32635 system_pods.go:89] "kube-vip-ha-844626-m03" [4207cddd-6eb3-40c6-be2c-ac895964aa0d] Running
	I0612 20:31:56.744519   32635 system_pods.go:89] "storage-provisioner" [d94c16d7-da82-41e3-82fe-83ed6e581f69] Running
	I0612 20:31:56.744529   32635 system_pods.go:126] duration metric: took 212.137812ms to wait for k8s-apps to be running ...
	I0612 20:31:56.744541   32635 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 20:31:56.744588   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:31:56.760798   32635 system_svc.go:56] duration metric: took 16.250874ms WaitForService to wait for kubelet
	I0612 20:31:56.760825   32635 kubeadm.go:576] duration metric: took 18.530299856s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 20:31:56.760848   32635 node_conditions.go:102] verifying NodePressure condition ...
	I0612 20:31:56.928296   32635 request.go:629] Waited for 167.369083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes
	I0612 20:31:56.928397   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes
	I0612 20:31:56.928408   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:56.928423   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:56.928432   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:56.935911   32635 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 20:31:56.937610   32635 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 20:31:56.937637   32635 node_conditions.go:123] node cpu capacity is 2
	I0612 20:31:56.937653   32635 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 20:31:56.937660   32635 node_conditions.go:123] node cpu capacity is 2
	I0612 20:31:56.937665   32635 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 20:31:56.937670   32635 node_conditions.go:123] node cpu capacity is 2
	I0612 20:31:56.937676   32635 node_conditions.go:105] duration metric: took 176.822246ms to run NodePressure ...
	I0612 20:31:56.937692   32635 start.go:240] waiting for startup goroutines ...
	I0612 20:31:56.937720   32635 start.go:254] writing updated cluster config ...
	I0612 20:31:56.938125   32635 ssh_runner.go:195] Run: rm -f paused
	I0612 20:31:56.991753   32635 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 20:31:56.993807   32635 out.go:177] * Done! kubectl is now configured to use "ha-844626" cluster and "default" namespace by default
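	The readiness sequence recorded above (per-pod "Ready" polling, the apiserver /healthz probe, the kube-apiserver process check, and the kubelet service check) can be replayed by hand against the same node. A minimal sketch, reusing the commands and endpoint from this run; the address 192.168.39.196:8443 is specific to this cluster, and curl -k stands in for minikube's authenticated Go HTTP client:

		# probe apiserver health (same endpoint the log polls); -k skips cert verification for brevity
		curl -k https://192.168.39.196:8443/healthz
		# confirm the apiserver process is up (the log runs this via ssh_runner)
		sudo pgrep -xnf 'kube-apiserver.*minikube.*'
		# confirm the kubelet systemd unit is active (equivalent of the log's is-active check)
		sudo systemctl is-active --quiet kubelet && echo ok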
	
	
	==> CRI-O <==
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.348736094Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718224526348709023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d57bab32-bc1a-4b68-a7bb-894920c058ac name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.349130027Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0cca443a-a275-45a7-a367-62c47966d80c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.349251912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0cca443a-a275-45a7-a367-62c47966d80c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.351430549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccf4b3ead47f7dfc1b7faf2419e80a004cb2158ced9fe68be13277115f3c6569,PodSandboxId:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224321149787168,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0,PodSandboxId:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119278046658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff,PodSandboxId:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119207347720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a8f38c6abf70e91806516f6efb3aec847188dad6c91439ca9660d95029a3e6,PodSandboxId:f9dadbeb4bc2e8a16844613b21df3ec41cfde1ec2af14a253acf83cca3a30c77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718224119120797950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c30a5477508feea3fbb6cfdecd135d22a50b2e156bd4473175e26702f5c416d0,PodSandboxId:129f4ebc50a11b61c1dd83775ccaebc4b91dbea2042983198fd5117bfc252683,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718224117627449734,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c,PodSandboxId:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171822411
3786698401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd52024c12a2b486d52b8f6803360b3172fb54227b17758bbd09a2e22dc32163,PodSandboxId:b103684a1a841cc799e6cf1a92d9d837be2f300bbf7cc35bdb47f898a491a851,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17182240970
53063306,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3a13e0b5fc3f27bb690c5d127326271,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d,PodSandboxId:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224093469563326,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92,PodSandboxId:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224093415472008,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41bc9389144d30c98a68d86d2f724492e05278d6c650700937bb9e9dca93881a,PodSandboxId:52f253395536d18114f5cc470daa0964b165f0d0ea899e8c3c61cd8cc9006f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224093393756393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac304305cc393d3678df3414155a5e9ca1fb5abecbd1ecb70c20c1c4f562bbf,PodSandboxId:4e98354eb40b14c0b715e4b40bf90e912f8896ef232ef8071df238b51fcc9a90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224093340732616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0cca443a-a275-45a7-a367-62c47966d80c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.398860963Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b344d1e-1d7f-40a4-8e66-2ebcf110e0c3 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.398951169Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b344d1e-1d7f-40a4-8e66-2ebcf110e0c3 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.400135499Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1e4365f-a236-4db1-a96b-4bd4ffd99016 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.400720986Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718224526400694816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1e4365f-a236-4db1-a96b-4bd4ffd99016 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.401249777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ae679a7-3ed6-484d-8df6-c7f8f370bae6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.401342065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ae679a7-3ed6-484d-8df6-c7f8f370bae6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.401607191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccf4b3ead47f7dfc1b7faf2419e80a004cb2158ced9fe68be13277115f3c6569,PodSandboxId:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224321149787168,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0,PodSandboxId:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119278046658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff,PodSandboxId:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119207347720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a8f38c6abf70e91806516f6efb3aec847188dad6c91439ca9660d95029a3e6,PodSandboxId:f9dadbeb4bc2e8a16844613b21df3ec41cfde1ec2af14a253acf83cca3a30c77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718224119120797950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c30a5477508feea3fbb6cfdecd135d22a50b2e156bd4473175e26702f5c416d0,PodSandboxId:129f4ebc50a11b61c1dd83775ccaebc4b91dbea2042983198fd5117bfc252683,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718224117627449734,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c,PodSandboxId:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171822411
3786698401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd52024c12a2b486d52b8f6803360b3172fb54227b17758bbd09a2e22dc32163,PodSandboxId:b103684a1a841cc799e6cf1a92d9d837be2f300bbf7cc35bdb47f898a491a851,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17182240970
53063306,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3a13e0b5fc3f27bb690c5d127326271,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d,PodSandboxId:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224093469563326,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92,PodSandboxId:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224093415472008,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41bc9389144d30c98a68d86d2f724492e05278d6c650700937bb9e9dca93881a,PodSandboxId:52f253395536d18114f5cc470daa0964b165f0d0ea899e8c3c61cd8cc9006f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224093393756393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac304305cc393d3678df3414155a5e9ca1fb5abecbd1ecb70c20c1c4f562bbf,PodSandboxId:4e98354eb40b14c0b715e4b40bf90e912f8896ef232ef8071df238b51fcc9a90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224093340732616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ae679a7-3ed6-484d-8df6-c7f8f370bae6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.441917800Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f0342d4-dc90-45a6-be2c-36bd50a169fc name=/runtime.v1.RuntimeService/Version
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.441989742Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f0342d4-dc90-45a6-be2c-36bd50a169fc name=/runtime.v1.RuntimeService/Version
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.443064590Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f84db456-96ed-4c66-94b9-51a67f1c56a1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.443710457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718224526443685391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f84db456-96ed-4c66-94b9-51a67f1c56a1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.444878325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a4d6b1f-7af4-44b1-9483-bd87d9ae9638 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.444978504Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a4d6b1f-7af4-44b1-9483-bd87d9ae9638 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.445301779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccf4b3ead47f7dfc1b7faf2419e80a004cb2158ced9fe68be13277115f3c6569,PodSandboxId:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224321149787168,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0,PodSandboxId:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119278046658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff,PodSandboxId:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119207347720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a8f38c6abf70e91806516f6efb3aec847188dad6c91439ca9660d95029a3e6,PodSandboxId:f9dadbeb4bc2e8a16844613b21df3ec41cfde1ec2af14a253acf83cca3a30c77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718224119120797950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c30a5477508feea3fbb6cfdecd135d22a50b2e156bd4473175e26702f5c416d0,PodSandboxId:129f4ebc50a11b61c1dd83775ccaebc4b91dbea2042983198fd5117bfc252683,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718224117627449734,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c,PodSandboxId:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171822411
3786698401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd52024c12a2b486d52b8f6803360b3172fb54227b17758bbd09a2e22dc32163,PodSandboxId:b103684a1a841cc799e6cf1a92d9d837be2f300bbf7cc35bdb47f898a491a851,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17182240970
53063306,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3a13e0b5fc3f27bb690c5d127326271,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d,PodSandboxId:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224093469563326,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92,PodSandboxId:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224093415472008,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41bc9389144d30c98a68d86d2f724492e05278d6c650700937bb9e9dca93881a,PodSandboxId:52f253395536d18114f5cc470daa0964b165f0d0ea899e8c3c61cd8cc9006f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224093393756393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac304305cc393d3678df3414155a5e9ca1fb5abecbd1ecb70c20c1c4f562bbf,PodSandboxId:4e98354eb40b14c0b715e4b40bf90e912f8896ef232ef8071df238b51fcc9a90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224093340732616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a4d6b1f-7af4-44b1-9483-bd87d9ae9638 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.484020979Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75330959-3c1b-46a8-b58b-8590db71c2f8 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.484090715Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75330959-3c1b-46a8-b58b-8590db71c2f8 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.485103244Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed415768-badb-4ae0-bd76-b9bb29327a26 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.485775845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718224526485752456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed415768-badb-4ae0-bd76-b9bb29327a26 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.486521039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b64d55e4-6d26-4d3e-9f39-f03da4ccfb62 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.486591553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b64d55e4-6d26-4d3e-9f39-f03da4ccfb62 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:35:26 ha-844626 crio[683]: time="2024-06-12 20:35:26.486942925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccf4b3ead47f7dfc1b7faf2419e80a004cb2158ced9fe68be13277115f3c6569,PodSandboxId:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224321149787168,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0,PodSandboxId:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119278046658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff,PodSandboxId:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119207347720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a8f38c6abf70e91806516f6efb3aec847188dad6c91439ca9660d95029a3e6,PodSandboxId:f9dadbeb4bc2e8a16844613b21df3ec41cfde1ec2af14a253acf83cca3a30c77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718224119120797950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c30a5477508feea3fbb6cfdecd135d22a50b2e156bd4473175e26702f5c416d0,PodSandboxId:129f4ebc50a11b61c1dd83775ccaebc4b91dbea2042983198fd5117bfc252683,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718224117627449734,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c,PodSandboxId:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171822411
3786698401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd52024c12a2b486d52b8f6803360b3172fb54227b17758bbd09a2e22dc32163,PodSandboxId:b103684a1a841cc799e6cf1a92d9d837be2f300bbf7cc35bdb47f898a491a851,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17182240970
53063306,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3a13e0b5fc3f27bb690c5d127326271,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d,PodSandboxId:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224093469563326,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92,PodSandboxId:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224093415472008,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41bc9389144d30c98a68d86d2f724492e05278d6c650700937bb9e9dca93881a,PodSandboxId:52f253395536d18114f5cc470daa0964b165f0d0ea899e8c3c61cd8cc9006f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224093393756393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac304305cc393d3678df3414155a5e9ca1fb5abecbd1ecb70c20c1c4f562bbf,PodSandboxId:4e98354eb40b14c0b715e4b40bf90e912f8896ef232ef8071df238b51fcc9a90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224093340732616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b64d55e4-6d26-4d3e-9f39-f03da4ccfb62 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ccf4b3ead47f7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   61e1e7d7b51fb       busybox-fc5497c4f-bdzsx
	5eb15a71cbeec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   43f0b5e0d015c       coredns-7db6d8ff4d-lxd6n
	6f896bc7211fd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   5dcd51ad312e1       coredns-7db6d8ff4d-bqzvn
	63a8f38c6abf7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   f9dadbeb4bc2e       storage-provisioner
	c30a5477508fe       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    6 minutes ago       Running             kindnet-cni               0                   129f4ebc50a11       kindnet-mthnq
	b028950fdf37b       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      6 minutes ago       Running             kube-proxy                0                   4e233e0bc3bb7       kube-proxy-69ctp
	cd52024c12a2b       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   b103684a1a841       kube-vip-ha-844626
	6255c7db8bcf2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   b0297d465b251       etcd-ha-844626
	223d45eb38f84       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago       Running             kube-scheduler            0                   5512a35ec1cf1       kube-scheduler-ha-844626
	41bc9389144d3       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago       Running             kube-apiserver            0                   52f253395536d       kube-apiserver-ha-844626
	1ac304305cc39       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago       Running             kube-controller-manager   0                   4e98354eb40b1       kube-controller-manager-ha-844626
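	
	The container list above is the same view CRI-O's own CLI gives on the node. A minimal sketch for reproducing it by hand (assuming SSH access to the profile VM and the default CRI-O socket path, the same one recorded in the node annotations below):
	
	  out/minikube-linux-amd64 -p ha-844626 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"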
	
	
	==> coredns [5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0] <==
	[INFO] 10.244.1.2:48442 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001875956s
	[INFO] 10.244.1.2:48528 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000293437s
	[INFO] 10.244.1.2:41648 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125681s
	[INFO] 10.244.1.2:54972 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113166s
	[INFO] 10.244.1.2:41309 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085981s
	[INFO] 10.244.2.2:46088 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001813687s
	[INFO] 10.244.2.2:41288 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099916s
	[INFO] 10.244.2.2:50111 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001353864s
	[INFO] 10.244.2.2:58718 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071988s
	[INFO] 10.244.2.2:53104 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063402s
	[INFO] 10.244.2.2:33504 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000200272s
	[INFO] 10.244.0.4:57974 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068404s
	[INFO] 10.244.1.2:36180 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000396478s
	[INFO] 10.244.1.2:44974 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143897s
	[INFO] 10.244.2.2:45916 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153283s
	[INFO] 10.244.2.2:54255 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107674s
	[INFO] 10.244.2.2:37490 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120001s
	[INFO] 10.244.2.2:35084 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008018s
	[INFO] 10.244.0.4:39477 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000273278s
	[INFO] 10.244.1.2:48205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158614s
	[INFO] 10.244.1.2:59881 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158202s
	[INFO] 10.244.1.2:35567 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000472197s
	[INFO] 10.244.1.2:56490 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000211826s
	[INFO] 10.244.2.2:48246 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156952s
	[INFO] 10.244.2.2:43466 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117313s
	
	
	==> coredns [6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff] <==
	[INFO] 10.244.0.4:35966 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.004427903s
	[INFO] 10.244.1.2:42207 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183564s
	[INFO] 10.244.2.2:40381 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000155821s
	[INFO] 10.244.2.2:38862 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000101136s
	[INFO] 10.244.2.2:44086 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001894727s
	[INFO] 10.244.0.4:56242 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009694s
	[INFO] 10.244.0.4:50224 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170892s
	[INFO] 10.244.0.4:50347 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139284s
	[INFO] 10.244.0.4:43967 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.022155051s
	[INFO] 10.244.0.4:34878 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000206851s
	[INFO] 10.244.1.2:46797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00034142s
	[INFO] 10.244.1.2:43369 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000248825s
	[INFO] 10.244.1.2:56650 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001632154s
	[INFO] 10.244.2.2:38141 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172487s
	[INFO] 10.244.2.2:60906 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158767s
	[INFO] 10.244.0.4:40480 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117274s
	[INFO] 10.244.0.4:47149 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000771s
	[INFO] 10.244.0.4:56834 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000323893s
	[INFO] 10.244.1.2:44664 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146272s
	[INFO] 10.244.1.2:47748 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110683s
	[INFO] 10.244.0.4:39510 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159779s
	[INFO] 10.244.0.4:49210 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125351s
	[INFO] 10.244.0.4:48326 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000179032s
	[INFO] 10.244.2.2:38296 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150584s
	[INFO] 10.244.2.2:58162 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116767s
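	
	The lookups above (kubernetes.default, kubernetes.default.svc.cluster.local, host.minikube.internal) match the DNS resolution checks the busybox test pods run. A minimal sketch for re-running one of them, assuming the busybox pod listed in the container status above is still present:
	
	  kubectl --context ha-844626 exec busybox-fc5497c4f-bdzsx -- nslookup kubernetes.default.svc.cluster.local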
	
	
	==> describe nodes <==
	Name:               ha-844626
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844626
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=ha-844626
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T20_28_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:28:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844626
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:35:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:32:24 +0000   Wed, 12 Jun 2024 20:28:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:32:24 +0000   Wed, 12 Jun 2024 20:28:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:32:24 +0000   Wed, 12 Jun 2024 20:28:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:32:24 +0000   Wed, 12 Jun 2024 20:28:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    ha-844626
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca8d79507bbc4f44bf947af92833058f
	  System UUID:                ca8d7950-7bbc-4f44-bf94-7af92833058f
	  Boot ID:                    da0f0a2a-5126-4bca-9f1f-744b30254ff4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bdzsx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 coredns-7db6d8ff4d-bqzvn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m53s
	  kube-system                 coredns-7db6d8ff4d-lxd6n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m53s
	  kube-system                 etcd-ha-844626                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m7s
	  kube-system                 kindnet-mthnq                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m53s
	  kube-system                 kube-apiserver-ha-844626             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-controller-manager-ha-844626    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-proxy-69ctp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m53s
	  kube-system                 kube-scheduler-ha-844626             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-vip-ha-844626                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m52s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m14s (x7 over 7m14s)  kubelet          Node ha-844626 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m14s (x8 over 7m14s)  kubelet          Node ha-844626 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m14s (x8 over 7m14s)  kubelet          Node ha-844626 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m7s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m7s                   kubelet          Node ha-844626 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m7s                   kubelet          Node ha-844626 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m7s                   kubelet          Node ha-844626 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m54s                  node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	  Normal  NodeReady                6m48s                  kubelet          Node ha-844626 status is now: NodeReady
	  Normal  RegisteredNode           4m44s                  node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	  Normal  RegisteredNode           3m34s                  node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	
	
	Name:               ha-844626-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844626-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=ha-844626
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T20_30_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:30:25 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844626-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:32:58 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 12 Jun 2024 20:32:27 +0000   Wed, 12 Jun 2024 20:33:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 12 Jun 2024 20:32:27 +0000   Wed, 12 Jun 2024 20:33:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 12 Jun 2024 20:32:27 +0000   Wed, 12 Jun 2024 20:33:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 12 Jun 2024 20:32:27 +0000   Wed, 12 Jun 2024 20:33:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.108
	  Hostname:    ha-844626-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc34ec9a17c449479c11e07f628f1a6e
	  System UUID:                fc34ec9a-17c4-4947-9c11-e07f628f1a6e
	  Boot ID:                    3b223b75-c640-40c2-9cb9-0319e4770144
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bh59q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-ha-844626-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m56s
	  kube-system                 kindnet-fz6bl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m1s
	  kube-system                 kube-apiserver-ha-844626-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-controller-manager-ha-844626-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-proxy-f7ct8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-ha-844626-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-vip-ha-844626-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m1s (x8 over 5m1s)  kubelet          Node ha-844626-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m1s (x8 over 5m1s)  kubelet          Node ha-844626-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m1s (x7 over 5m1s)  kubelet          Node ha-844626-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m59s                node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  RegisteredNode           4m44s                node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  RegisteredNode           3m34s                node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  NodeNotReady             104s                 node-controller  Node ha-844626-m02 status is now: NodeNotReady
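	
	The unreachable taints and the "Kubelet stopped posting node status" conditions above show that ha-844626-m02 dropped out roughly 104s before this capture, consistent with the secondary control-plane node having been stopped. A quick sketch for confirming node state from the host, assuming the kubeconfig context created for this profile:
	
	  kubectl --context ha-844626 get nodes -o wide
	  kubectl --context ha-844626 describe node ha-844626-m02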
	
	
	Name:               ha-844626-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844626-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=ha-844626
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T20_31_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:31:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844626-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:35:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:32:05 +0000   Wed, 12 Jun 2024 20:31:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:32:05 +0000   Wed, 12 Jun 2024 20:31:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:32:05 +0000   Wed, 12 Jun 2024 20:31:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:32:05 +0000   Wed, 12 Jun 2024 20:31:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    ha-844626-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e6bf394d9ac40219e8a5de4a5d52b0f
	  System UUID:                1e6bf394-d9ac-4021-9e8a-5de4a5d52b0f
	  Boot ID:                    ef8801d4-4f53-4501-8d8f-1febd29ecc5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dhw8h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-ha-844626-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m50s
	  kube-system                 kindnet-8hdxz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m52s
	  kube-system                 kube-apiserver-ha-844626-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-controller-manager-ha-844626-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-proxy-2clg8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-scheduler-ha-844626-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-vip-ha-844626-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m52s (x8 over 3m52s)  kubelet          Node ha-844626-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x8 over 3m52s)  kubelet          Node ha-844626-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x7 over 3m52s)  kubelet          Node ha-844626-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-844626-m03 event: Registered Node ha-844626-m03 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-844626-m03 event: Registered Node ha-844626-m03 in Controller
	  Normal  RegisteredNode           3m34s                  node-controller  Node ha-844626-m03 event: Registered Node ha-844626-m03 in Controller
	
	
	Name:               ha-844626-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844626-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=ha-844626
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T20_32_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:32:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844626-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:35:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:33:05 +0000   Wed, 12 Jun 2024 20:32:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:33:05 +0000   Wed, 12 Jun 2024 20:32:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:33:05 +0000   Wed, 12 Jun 2024 20:32:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:33:05 +0000   Wed, 12 Jun 2024 20:32:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    ha-844626-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 76e9ad048f36466a8cb780349dbd0fce
	  System UUID:                76e9ad04-8f36-466a-8cb7-80349dbd0fce
	  Boot ID:                    9b195a09-7c2c-4edb-aee8-31e13eaba894
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pwr4p       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m51s
	  kube-system                 kube-proxy-dbk2r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m51s (x3 over 2m52s)  kubelet          Node ha-844626-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m51s (x3 over 2m52s)  kubelet          Node ha-844626-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m51s (x3 over 2m52s)  kubelet          Node ha-844626-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-844626-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun12 20:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051526] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040422] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.527122] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.467419] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.568206] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun12 20:28] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.063983] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073055] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.159207] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.152158] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.286482] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.221083] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.069110] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.063782] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.293152] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.089558] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.977157] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.420198] kauditd_printk_skb: 38 callbacks suppressed
	[Jun12 20:30] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d] <==
	{"level":"warn","ts":"2024-06-12T20:35:26.376538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.764799Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.769275Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.776502Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.777583Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.785014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.795769Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.803903Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.809008Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.812414Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.822635Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.830191Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.837679Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.841462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.845836Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.858477Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.861003Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.867587Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.875137Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.876106Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.878871Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.88232Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.887531Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.895838Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:35:26.90234Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
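	
	etcd on ha-844626 keeps dropping heartbeats to peer d248ce75fc8bdbf7 because that peer is unreachable (remote-peer-active is false), which lines up with ha-844626-m02 being down. A sketch for checking endpoint health from the surviving control plane; the certificate paths assume minikube's certificate directory (/var/lib/minikube/certs) and are an assumption, not taken from this log:
	
	  kubectl --context ha-844626 -n kube-system exec etcd-ha-844626 -- etcdctl \
	    --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint health --cluster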
	
	
	==> kernel <==
	 20:35:26 up 7 min,  0 users,  load average: 0.41, 0.31, 0.16
	Linux ha-844626 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c30a5477508feea3fbb6cfdecd135d22a50b2e156bd4473175e26702f5c416d0] <==
	I0612 20:34:48.952718       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	I0612 20:34:58.967784       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0612 20:34:58.967832       1 main.go:227] handling current node
	I0612 20:34:58.967846       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0612 20:34:58.967853       1 main.go:250] Node ha-844626-m02 has CIDR [10.244.1.0/24] 
	I0612 20:34:58.968012       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0612 20:34:58.968045       1 main.go:250] Node ha-844626-m03 has CIDR [10.244.2.0/24] 
	I0612 20:34:58.968149       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0612 20:34:58.968179       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	I0612 20:35:08.980728       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0612 20:35:08.980828       1 main.go:227] handling current node
	I0612 20:35:08.980854       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0612 20:35:08.980872       1 main.go:250] Node ha-844626-m02 has CIDR [10.244.1.0/24] 
	I0612 20:35:08.981038       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0612 20:35:08.981061       1 main.go:250] Node ha-844626-m03 has CIDR [10.244.2.0/24] 
	I0612 20:35:08.981124       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0612 20:35:08.981152       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	I0612 20:35:18.988122       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0612 20:35:18.988170       1 main.go:227] handling current node
	I0612 20:35:18.988190       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0612 20:35:18.988237       1 main.go:250] Node ha-844626-m02 has CIDR [10.244.1.0/24] 
	I0612 20:35:18.988352       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0612 20:35:18.988376       1 main.go:250] Node ha-844626-m03 has CIDR [10.244.2.0/24] 
	I0612 20:35:18.988432       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0612 20:35:18.988453       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [41bc9389144d30c98a68d86d2f724492e05278d6c650700937bb9e9dca93881a] <==
	I0612 20:28:18.935723       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 20:28:19.678635       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 20:28:19.692646       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0612 20:28:19.822166       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 20:28:32.793517       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0612 20:28:33.193380       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0612 20:30:26.015026       1 wrap.go:54] timeout or abort while handling: method=POST URI="/api/v1/namespaces/kube-system/events" audit-ID="bd780f3c-7a4e-4ef7-b113-51a12949e669"
	E0612 20:30:26.015079       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 161.105µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0612 20:30:26.015100       1 timeout.go:142] post-timeout activity - time-elapsed: 2.376µs, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0612 20:32:02.452698       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47728: use of closed network connection
	E0612 20:32:02.659766       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47748: use of closed network connection
	E0612 20:32:02.846816       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47764: use of closed network connection
	E0612 20:32:03.062574       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47782: use of closed network connection
	E0612 20:32:03.248844       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47808: use of closed network connection
	E0612 20:32:03.438792       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47836: use of closed network connection
	E0612 20:32:03.612771       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47848: use of closed network connection
	E0612 20:32:03.798577       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47862: use of closed network connection
	E0612 20:32:03.969493       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47866: use of closed network connection
	E0612 20:32:04.284697       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47888: use of closed network connection
	E0612 20:32:04.469044       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47914: use of closed network connection
	E0612 20:32:04.667569       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47928: use of closed network connection
	E0612 20:32:04.868188       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47948: use of closed network connection
	E0612 20:32:05.057257       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47962: use of closed network connection
	E0612 20:32:05.237804       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47968: use of closed network connection
	W0612 20:33:18.823492       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.196 192.168.39.76]
	
	
	==> kube-controller-manager [1ac304305cc393d3678df3414155a5e9ca1fb5abecbd1ecb70c20c1c4f562bbf] <==
	I0612 20:31:58.294558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="188.015879ms"
	I0612 20:31:58.341531       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.911887ms"
	I0612 20:31:58.341859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.962µs"
	I0612 20:31:58.480463       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.565299ms"
	I0612 20:31:58.480603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.865µs"
	I0612 20:31:59.760811       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.899µs"
	I0612 20:31:59.772055       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="115.52µs"
	I0612 20:31:59.777140       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.055µs"
	I0612 20:31:59.796890       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.149µs"
	I0612 20:31:59.807477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.919µs"
	I0612 20:31:59.816716       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="300.243µs"
	I0612 20:32:01.517901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.269598ms"
	I0612 20:32:01.517997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.857µs"
	I0612 20:32:01.771751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.45305ms"
	I0612 20:32:01.771971       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.672µs"
	I0612 20:32:02.014075       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.476415ms"
	I0612 20:32:02.014298       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="151.137µs"
	E0612 20:32:34.920751       1 certificate_controller.go:146] Sync csr-jhkfg failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-jhkfg": the object has been modified; please apply your changes to the latest version and try again
	I0612 20:32:35.208325       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-844626-m04\" does not exist"
	I0612 20:32:35.227686       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-844626-m04" podCIDRs=["10.244.3.0/24"]
	I0612 20:32:37.305745       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-844626-m04"
	I0612 20:32:45.641684       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-844626-m04"
	I0612 20:33:42.350025       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-844626-m04"
	I0612 20:33:42.508379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.703911ms"
	I0612 20:33:42.508535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.547µs"
	
	
	==> kube-proxy [b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c] <==
	I0612 20:28:34.147183       1 server_linux.go:69] "Using iptables proxy"
	I0612 20:28:34.165061       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.196"]
	I0612 20:28:34.245342       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 20:28:34.245407       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 20:28:34.245424       1 server_linux.go:165] "Using iptables Proxier"
	I0612 20:28:34.255837       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 20:28:34.256333       1 server.go:872] "Version info" version="v1.30.1"
	I0612 20:28:34.256391       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:28:34.257947       1 config.go:192] "Starting service config controller"
	I0612 20:28:34.258011       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 20:28:34.258065       1 config.go:101] "Starting endpoint slice config controller"
	I0612 20:28:34.258085       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 20:28:34.260520       1 config.go:319] "Starting node config controller"
	I0612 20:28:34.261519       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 20:28:34.358924       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 20:28:34.359015       1 shared_informer.go:320] Caches are synced for service config
	I0612 20:28:34.361763       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92] <==
	W0612 20:28:18.299945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0612 20:28:18.299998       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0612 20:28:18.312918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0612 20:28:18.312948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0612 20:28:18.314410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0612 20:28:18.314482       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0612 20:28:18.342701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0612 20:28:18.342749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0612 20:28:18.433677       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 20:28:18.433733       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 20:28:21.051023       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0612 20:32:35.318772       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pwr4p\": pod kindnet-pwr4p is already assigned to node \"ha-844626-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pwr4p" node="ha-844626-m04"
	E0612 20:32:35.318997       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9757a4d6-0eb4-4893-8673-17fbeb293219(kube-system/kindnet-pwr4p) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-pwr4p"
	E0612 20:32:35.319032       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pwr4p\": pod kindnet-pwr4p is already assigned to node \"ha-844626-m04\"" pod="kube-system/kindnet-pwr4p"
	I0612 20:32:35.319080       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pwr4p" node="ha-844626-m04"
	E0612 20:32:35.330850       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dbk2r\": pod kube-proxy-dbk2r is already assigned to node \"ha-844626-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dbk2r" node="ha-844626-m04"
	E0612 20:32:35.330959       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3de040c5-ed32-45b2-94d6-b89ca999a410(kube-system/kube-proxy-dbk2r) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-dbk2r"
	E0612 20:32:35.331033       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dbk2r\": pod kube-proxy-dbk2r is already assigned to node \"ha-844626-m04\"" pod="kube-system/kube-proxy-dbk2r"
	I0612 20:32:35.331056       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dbk2r" node="ha-844626-m04"
	E0612 20:32:35.356582       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hnnqg\": pod kube-proxy-hnnqg is already assigned to node \"ha-844626-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hnnqg" node="ha-844626-m04"
	E0612 20:32:35.356735       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hnnqg\": pod kube-proxy-hnnqg is already assigned to node \"ha-844626-m04\"" pod="kube-system/kube-proxy-hnnqg"
	E0612 20:32:35.367332       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-45rls\": pod kindnet-45rls is already assigned to node \"ha-844626-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-45rls" node="ha-844626-m04"
	E0612 20:32:35.367412       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b1aac0cb-9a25-43e6-88e9-99b045417097(kube-system/kindnet-45rls) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-45rls"
	E0612 20:32:35.367432       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-45rls\": pod kindnet-45rls is already assigned to node \"ha-844626-m04\"" pod="kube-system/kindnet-45rls"
	I0612 20:32:35.367452       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-45rls" node="ha-844626-m04"
	
	
	==> kubelet <==
	Jun 12 20:31:58 ha-844626 kubelet[1371]: I0612 20:31:58.941802    1371 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tbdz8\" (UniqueName: \"kubernetes.io/projected/ffb6ae9a-674c-4892-97cf-c8b2a315a7c8-kube-api-access-tbdz8\") on node \"ha-844626\" DevicePath \"\""
	Jun 12 20:31:58 ha-844626 kubelet[1371]: I0612 20:31:58.941835    1371 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-72wc7\" (UniqueName: \"kubernetes.io/projected/131dbc49-9ce2-4a52-8f71-5fc48385f5cf-kube-api-access-72wc7\") on node \"ha-844626\" DevicePath \"\""
	Jun 12 20:31:59 ha-844626 kubelet[1371]: I0612 20:31:59.794259    1371 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e89b7b2-9401-4076-bc0c-54749c16daf5" path="/var/lib/kubelet/pods/9e89b7b2-9401-4076-bc0c-54749c16daf5/volumes"
	Jun 12 20:31:59 ha-844626 kubelet[1371]: I0612 20:31:59.794546    1371 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb6ae9a-674c-4892-97cf-c8b2a315a7c8" path="/var/lib/kubelet/pods/ffb6ae9a-674c-4892-97cf-c8b2a315a7c8/volumes"
	Jun 12 20:32:01 ha-844626 kubelet[1371]: I0612 20:32:01.795788    1371 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="131dbc49-9ce2-4a52-8f71-5fc48385f5cf" path="/var/lib/kubelet/pods/131dbc49-9ce2-4a52-8f71-5fc48385f5cf/volumes"
	Jun 12 20:32:19 ha-844626 kubelet[1371]: E0612 20:32:19.808314    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:32:19 ha-844626 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:32:19 ha-844626 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:32:19 ha-844626 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:32:19 ha-844626 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:33:19 ha-844626 kubelet[1371]: E0612 20:33:19.806297    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:33:19 ha-844626 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:33:19 ha-844626 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:33:19 ha-844626 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:33:19 ha-844626 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:34:19 ha-844626 kubelet[1371]: E0612 20:34:19.807083    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:34:19 ha-844626 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:34:19 ha-844626 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:34:19 ha-844626 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:34:19 ha-844626 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:35:19 ha-844626 kubelet[1371]: E0612 20:35:19.807301    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:35:19 ha-844626 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:35:19 ha-844626 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:35:19 ha-844626 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:35:19 ha-844626 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-844626 -n ha-844626
helpers_test.go:261: (dbg) Run:  kubectl --context ha-844626 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.94s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (60.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr: exit status 3 (3.2264542s)

                                                
                                                
-- stdout --
	ha-844626
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-844626-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 20:35:31.474036   37629 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:35:31.474146   37629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:35:31.474156   37629 out.go:304] Setting ErrFile to fd 2...
	I0612 20:35:31.474161   37629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:35:31.474368   37629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:35:31.474560   37629 out.go:298] Setting JSON to false
	I0612 20:35:31.474582   37629 mustload.go:65] Loading cluster: ha-844626
	I0612 20:35:31.474635   37629 notify.go:220] Checking for updates...
	I0612 20:35:31.475505   37629 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:35:31.475577   37629 status.go:255] checking status of ha-844626 ...
	I0612 20:35:31.476672   37629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:31.476725   37629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:31.493550   37629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38435
	I0612 20:35:31.493940   37629 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:31.494511   37629 main.go:141] libmachine: Using API Version  1
	I0612 20:35:31.494538   37629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:31.494953   37629 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:31.495136   37629 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:35:31.496827   37629 status.go:330] ha-844626 host status = "Running" (err=<nil>)
	I0612 20:35:31.496843   37629 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:35:31.497141   37629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:31.497181   37629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:31.511800   37629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40639
	I0612 20:35:31.512235   37629 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:31.512720   37629 main.go:141] libmachine: Using API Version  1
	I0612 20:35:31.512748   37629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:31.513008   37629 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:31.513164   37629 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:35:31.515950   37629 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:31.516358   37629 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:35:31.516383   37629 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:31.516656   37629 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:35:31.516974   37629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:31.517015   37629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:31.531967   37629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I0612 20:35:31.532339   37629 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:31.532835   37629 main.go:141] libmachine: Using API Version  1
	I0612 20:35:31.532855   37629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:31.533211   37629 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:31.533401   37629 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:35:31.533585   37629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:31.533609   37629 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:35:31.536418   37629 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:31.536826   37629 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:35:31.536853   37629 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:31.537066   37629 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:35:31.537248   37629 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:35:31.537412   37629 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:35:31.537560   37629 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:35:31.618655   37629 ssh_runner.go:195] Run: systemctl --version
	I0612 20:35:31.626313   37629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:31.642697   37629 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:35:31.642723   37629 api_server.go:166] Checking apiserver status ...
	I0612 20:35:31.642751   37629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:35:31.657127   37629 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0612 20:35:31.667332   37629 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:35:31.667402   37629 ssh_runner.go:195] Run: ls
	I0612 20:35:31.671808   37629 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:35:31.676131   37629 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:35:31.676155   37629 status.go:422] ha-844626 apiserver status = Running (err=<nil>)
	I0612 20:35:31.676164   37629 status.go:257] ha-844626 status: &{Name:ha-844626 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:35:31.676180   37629 status.go:255] checking status of ha-844626-m02 ...
	I0612 20:35:31.676452   37629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:31.676476   37629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:31.692420   37629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40803
	I0612 20:35:31.692808   37629 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:31.693251   37629 main.go:141] libmachine: Using API Version  1
	I0612 20:35:31.693271   37629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:31.693604   37629 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:31.693811   37629 main.go:141] libmachine: (ha-844626-m02) Calling .GetState
	I0612 20:35:31.695491   37629 status.go:330] ha-844626-m02 host status = "Running" (err=<nil>)
	I0612 20:35:31.695506   37629 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:35:31.695799   37629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:31.695826   37629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:31.710264   37629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0612 20:35:31.710609   37629 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:31.711050   37629 main.go:141] libmachine: Using API Version  1
	I0612 20:35:31.711094   37629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:31.711421   37629 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:31.711611   37629 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:35:31.714581   37629 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:31.715105   37629 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:35:31.715132   37629 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:31.715297   37629 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:35:31.715595   37629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:31.715637   37629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:31.730958   37629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39843
	I0612 20:35:31.731474   37629 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:31.731977   37629 main.go:141] libmachine: Using API Version  1
	I0612 20:35:31.731996   37629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:31.732314   37629 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:31.732477   37629 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:35:31.732730   37629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:31.732757   37629 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:35:31.735586   37629 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:31.736103   37629 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:35:31.736130   37629 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:31.736266   37629 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:35:31.736432   37629 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:35:31.736594   37629 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:35:31.736743   37629 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	W0612 20:35:34.287464   37629 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.108:22: connect: no route to host
	W0612 20:35:34.287582   37629 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	E0612 20:35:34.287609   37629 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:35:34.287617   37629 status.go:257] ha-844626-m02 status: &{Name:ha-844626-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0612 20:35:34.287634   37629 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:35:34.287641   37629 status.go:255] checking status of ha-844626-m03 ...
	I0612 20:35:34.287952   37629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:34.288003   37629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:34.304142   37629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44735
	I0612 20:35:34.304666   37629 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:34.305213   37629 main.go:141] libmachine: Using API Version  1
	I0612 20:35:34.305246   37629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:34.305617   37629 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:34.305814   37629 main.go:141] libmachine: (ha-844626-m03) Calling .GetState
	I0612 20:35:34.307766   37629 status.go:330] ha-844626-m03 host status = "Running" (err=<nil>)
	I0612 20:35:34.307793   37629 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:35:34.308129   37629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:34.308163   37629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:34.324524   37629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I0612 20:35:34.324957   37629 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:34.325574   37629 main.go:141] libmachine: Using API Version  1
	I0612 20:35:34.325608   37629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:34.325969   37629 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:34.326195   37629 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:35:34.329217   37629 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:34.329683   37629 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:35:34.329712   37629 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:34.329890   37629 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:35:34.330241   37629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:34.330301   37629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:34.346096   37629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39061
	I0612 20:35:34.346513   37629 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:34.346973   37629 main.go:141] libmachine: Using API Version  1
	I0612 20:35:34.346993   37629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:34.347359   37629 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:34.347574   37629 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:35:34.347797   37629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:34.347818   37629 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:35:34.350848   37629 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:34.351229   37629 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:35:34.351255   37629 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:34.351373   37629 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:35:34.351548   37629 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:35:34.351696   37629 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:35:34.351837   37629 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:35:34.431923   37629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:34.450288   37629 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:35:34.450315   37629 api_server.go:166] Checking apiserver status ...
	I0612 20:35:34.450347   37629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:35:34.466564   37629 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0612 20:35:34.478646   37629 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:35:34.478706   37629 ssh_runner.go:195] Run: ls
	I0612 20:35:34.483575   37629 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:35:34.488229   37629 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:35:34.488253   37629 status.go:422] ha-844626-m03 apiserver status = Running (err=<nil>)
	I0612 20:35:34.488262   37629 status.go:257] ha-844626-m03 status: &{Name:ha-844626-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:35:34.488276   37629 status.go:255] checking status of ha-844626-m04 ...
	I0612 20:35:34.488629   37629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:34.488665   37629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:34.505294   37629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36373
	I0612 20:35:34.505689   37629 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:34.506285   37629 main.go:141] libmachine: Using API Version  1
	I0612 20:35:34.506307   37629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:34.506583   37629 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:34.506798   37629 main.go:141] libmachine: (ha-844626-m04) Calling .GetState
	I0612 20:35:34.508428   37629 status.go:330] ha-844626-m04 host status = "Running" (err=<nil>)
	I0612 20:35:34.508446   37629 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:35:34.508717   37629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:34.508737   37629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:34.523387   37629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34909
	I0612 20:35:34.523822   37629 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:34.524477   37629 main.go:141] libmachine: Using API Version  1
	I0612 20:35:34.524502   37629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:34.524823   37629 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:34.525083   37629 main.go:141] libmachine: (ha-844626-m04) Calling .GetIP
	I0612 20:35:34.528194   37629 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:34.528747   37629 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:35:34.528776   37629 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:34.528931   37629 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:35:34.529418   37629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:34.529468   37629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:34.544685   37629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I0612 20:35:34.545109   37629 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:34.545625   37629 main.go:141] libmachine: Using API Version  1
	I0612 20:35:34.545644   37629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:34.545919   37629 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:34.546068   37629 main.go:141] libmachine: (ha-844626-m04) Calling .DriverName
	I0612 20:35:34.546212   37629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:34.546235   37629 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHHostname
	I0612 20:35:34.549358   37629 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:34.549787   37629 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:35:34.549817   37629 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:34.549985   37629 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHPort
	I0612 20:35:34.550142   37629 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHKeyPath
	I0612 20:35:34.550274   37629 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHUsername
	I0612 20:35:34.550420   37629 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m04/id_rsa Username:docker}
	I0612 20:35:34.640443   37629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:34.659802   37629 status.go:257] ha-844626-m04 status: &{Name:ha-844626-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr: exit status 3 (5.253122134s)

                                                
                                                
-- stdout --
	ha-844626
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-844626-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 20:35:35.583915   37729 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:35:35.584168   37729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:35:35.584178   37729 out.go:304] Setting ErrFile to fd 2...
	I0612 20:35:35.584182   37729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:35:35.584385   37729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:35:35.584542   37729 out.go:298] Setting JSON to false
	I0612 20:35:35.584562   37729 mustload.go:65] Loading cluster: ha-844626
	I0612 20:35:35.584691   37729 notify.go:220] Checking for updates...
	I0612 20:35:35.585073   37729 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:35:35.585093   37729 status.go:255] checking status of ha-844626 ...
	I0612 20:35:35.585548   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:35.585613   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:35.605192   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43495
	I0612 20:35:35.605663   37729 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:35.606257   37729 main.go:141] libmachine: Using API Version  1
	I0612 20:35:35.606286   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:35.606695   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:35.606897   37729 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:35:35.608464   37729 status.go:330] ha-844626 host status = "Running" (err=<nil>)
	I0612 20:35:35.608490   37729 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:35:35.608854   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:35.608899   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:35.624069   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I0612 20:35:35.624437   37729 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:35.624880   37729 main.go:141] libmachine: Using API Version  1
	I0612 20:35:35.624899   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:35.625241   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:35.625444   37729 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:35:35.628446   37729 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:35.628848   37729 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:35:35.628880   37729 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:35.629104   37729 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:35:35.629492   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:35.629545   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:35.645203   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32977
	I0612 20:35:35.645620   37729 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:35.646119   37729 main.go:141] libmachine: Using API Version  1
	I0612 20:35:35.646140   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:35.646433   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:35.646629   37729 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:35:35.646778   37729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:35.646798   37729 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:35:35.649838   37729 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:35.650218   37729 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:35:35.650245   37729 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:35.650408   37729 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:35:35.650592   37729 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:35:35.650733   37729 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:35:35.650906   37729 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:35:35.731148   37729 ssh_runner.go:195] Run: systemctl --version
	I0612 20:35:35.737545   37729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:35.753169   37729 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:35:35.753201   37729 api_server.go:166] Checking apiserver status ...
	I0612 20:35:35.753243   37729 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:35:35.767897   37729 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0612 20:35:35.777743   37729 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:35:35.777803   37729 ssh_runner.go:195] Run: ls
	I0612 20:35:35.782303   37729 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:35:35.788526   37729 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:35:35.788555   37729 status.go:422] ha-844626 apiserver status = Running (err=<nil>)
	I0612 20:35:35.788569   37729 status.go:257] ha-844626 status: &{Name:ha-844626 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:35:35.788592   37729 status.go:255] checking status of ha-844626-m02 ...
	I0612 20:35:35.789002   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:35.789056   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:35.804595   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33807
	I0612 20:35:35.805040   37729 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:35.805574   37729 main.go:141] libmachine: Using API Version  1
	I0612 20:35:35.805598   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:35.805908   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:35.806121   37729 main.go:141] libmachine: (ha-844626-m02) Calling .GetState
	I0612 20:35:35.807982   37729 status.go:330] ha-844626-m02 host status = "Running" (err=<nil>)
	I0612 20:35:35.807997   37729 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:35:35.808363   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:35.808399   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:35.824046   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42509
	I0612 20:35:35.824576   37729 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:35.825075   37729 main.go:141] libmachine: Using API Version  1
	I0612 20:35:35.825110   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:35.825479   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:35.825700   37729 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:35:35.829131   37729 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:35.829622   37729 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:35:35.829654   37729 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:35.829798   37729 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:35:35.830218   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:35.830260   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:35.845574   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40103
	I0612 20:35:35.846040   37729 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:35.846554   37729 main.go:141] libmachine: Using API Version  1
	I0612 20:35:35.846580   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:35.846910   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:35.847155   37729 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:35:35.847380   37729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:35.847399   37729 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:35:35.850753   37729 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:35.851257   37729 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:35:35.851349   37729 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:35.851445   37729 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:35:35.851618   37729 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:35:35.851786   37729 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:35:35.852112   37729 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	W0612 20:35:37.359585   37729 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:35:37.359654   37729 retry.go:31] will retry after 184.676405ms: dial tcp 192.168.39.108:22: connect: no route to host
	W0612 20:35:40.431528   37729 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.108:22: connect: no route to host
	W0612 20:35:40.431629   37729 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	E0612 20:35:40.431650   37729 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:35:40.431656   37729 status.go:257] ha-844626-m02 status: &{Name:ha-844626-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0612 20:35:40.431673   37729 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:35:40.431680   37729 status.go:255] checking status of ha-844626-m03 ...
	I0612 20:35:40.431973   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:40.432013   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:40.447212   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38925
	I0612 20:35:40.447653   37729 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:40.448204   37729 main.go:141] libmachine: Using API Version  1
	I0612 20:35:40.448233   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:40.448590   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:40.448778   37729 main.go:141] libmachine: (ha-844626-m03) Calling .GetState
	I0612 20:35:40.450682   37729 status.go:330] ha-844626-m03 host status = "Running" (err=<nil>)
	I0612 20:35:40.450700   37729 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:35:40.451064   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:40.451108   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:40.466000   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I0612 20:35:40.466450   37729 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:40.466926   37729 main.go:141] libmachine: Using API Version  1
	I0612 20:35:40.466946   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:40.467300   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:40.467517   37729 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:35:40.470430   37729 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:40.470942   37729 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:35:40.470982   37729 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:40.471120   37729 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:35:40.471484   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:40.471524   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:40.490362   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46669
	I0612 20:35:40.490749   37729 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:40.491199   37729 main.go:141] libmachine: Using API Version  1
	I0612 20:35:40.491227   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:40.491585   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:40.491739   37729 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:35:40.491955   37729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:40.491979   37729 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:35:40.495033   37729 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:40.495479   37729 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:35:40.495514   37729 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:40.495689   37729 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:35:40.495828   37729 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:35:40.495985   37729 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:35:40.496165   37729 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:35:40.576999   37729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:40.591772   37729 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:35:40.591794   37729 api_server.go:166] Checking apiserver status ...
	I0612 20:35:40.591831   37729 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:35:40.605803   37729 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0612 20:35:40.617323   37729 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:35:40.617383   37729 ssh_runner.go:195] Run: ls
	I0612 20:35:40.622229   37729 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:35:40.630354   37729 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:35:40.630384   37729 status.go:422] ha-844626-m03 apiserver status = Running (err=<nil>)
	I0612 20:35:40.630395   37729 status.go:257] ha-844626-m03 status: &{Name:ha-844626-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:35:40.630417   37729 status.go:255] checking status of ha-844626-m04 ...
	I0612 20:35:40.630813   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:40.630855   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:40.645994   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42515
	I0612 20:35:40.646545   37729 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:40.647076   37729 main.go:141] libmachine: Using API Version  1
	I0612 20:35:40.647097   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:40.647454   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:40.647629   37729 main.go:141] libmachine: (ha-844626-m04) Calling .GetState
	I0612 20:35:40.649460   37729 status.go:330] ha-844626-m04 host status = "Running" (err=<nil>)
	I0612 20:35:40.649477   37729 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:35:40.649750   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:40.649807   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:40.665824   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45357
	I0612 20:35:40.666225   37729 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:40.666693   37729 main.go:141] libmachine: Using API Version  1
	I0612 20:35:40.666714   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:40.667042   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:40.667293   37729 main.go:141] libmachine: (ha-844626-m04) Calling .GetIP
	I0612 20:35:40.670284   37729 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:40.670697   37729 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:35:40.670724   37729 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:40.670862   37729 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:35:40.671211   37729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:40.671245   37729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:40.688640   37729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45979
	I0612 20:35:40.689024   37729 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:40.689450   37729 main.go:141] libmachine: Using API Version  1
	I0612 20:35:40.689470   37729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:40.689714   37729 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:40.689863   37729 main.go:141] libmachine: (ha-844626-m04) Calling .DriverName
	I0612 20:35:40.690047   37729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:40.690076   37729 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHHostname
	I0612 20:35:40.692942   37729 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:40.693498   37729 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:35:40.693522   37729 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:40.693684   37729 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHPort
	I0612 20:35:40.693831   37729 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHKeyPath
	I0612 20:35:40.693959   37729 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHUsername
	I0612 20:35:40.694083   37729 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m04/id_rsa Username:docker}
	I0612 20:35:40.779584   37729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:40.796113   37729 status.go:257] ha-844626-m04 status: &{Name:ha-844626-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
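The stderr above shows why ha-844626-m02 is reported as Host:Error / Kubelet:Nonexistent: the status probe tries to open an SSH session to 192.168.39.108:22 to run `df -h /var | awk 'NR==2{print $5}'` and gives up after repeated "no route to host" dial failures. The following is a minimal standalone sketch of that reachability probe, not minikube's actual sshutil code; the host, username, key path, and command are taken from the log above, while the retry policy and helper names are simplified assumptions for illustration.

	package main
	
	import (
		"fmt"
		"os"
		"time"
	
		"golang.org/x/crypto/ssh"
	)
	
	// probeVar dials the node over SSH and runs the same disk-usage check the
	// status command issues; a "no route to host" error here reproduces the
	// Host:Error state reported for ha-844626-m02. (Illustrative sketch only.)
	func probeVar(addr, keyPath string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway probe
			Timeout:         5 * time.Second,
		}
	
		var client *ssh.Client
		for attempt := 0; attempt < 3; attempt++ { // simplified retry; minikube's backoff differs
			client, err = ssh.Dial("tcp", addr, cfg)
			if err == nil {
				break
			}
			time.Sleep(200 * time.Millisecond)
		}
		if err != nil {
			return "", fmt.Errorf("dial %s: %w", addr, err)
		}
		defer client.Close()
	
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
	
		out, err := sess.Output(`df -h /var | awk 'NR==2{print $5}'`)
		return string(out), err
	}
	
	func main() {
		usage, err := probeVar("192.168.39.108:22",
			"/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa")
		if err != nil {
			// e.g. dial tcp 192.168.39.108:22: connect: no route to host
			fmt.Println("probe failed:", err)
			return
		}
		fmt.Println("/var usage:", usage)
	}

Run against the failing node this would reproduce the same "connect: no route to host" error seen in the retries above, which is enough to flip the node's reported host state to Error.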
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr: exit status 3 (4.444556579s)

                                                
                                                
-- stdout --
	ha-844626
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-844626-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 20:35:42.724771   37828 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:35:42.725020   37828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:35:42.725032   37828 out.go:304] Setting ErrFile to fd 2...
	I0612 20:35:42.725037   37828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:35:42.725272   37828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:35:42.725434   37828 out.go:298] Setting JSON to false
	I0612 20:35:42.725459   37828 mustload.go:65] Loading cluster: ha-844626
	I0612 20:35:42.725530   37828 notify.go:220] Checking for updates...
	I0612 20:35:42.725871   37828 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:35:42.725888   37828 status.go:255] checking status of ha-844626 ...
	I0612 20:35:42.726475   37828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:42.726535   37828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:42.744714   37828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45697
	I0612 20:35:42.745248   37828 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:42.745879   37828 main.go:141] libmachine: Using API Version  1
	I0612 20:35:42.745904   37828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:42.746235   37828 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:42.746405   37828 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:35:42.747977   37828 status.go:330] ha-844626 host status = "Running" (err=<nil>)
	I0612 20:35:42.747999   37828 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:35:42.748310   37828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:42.748364   37828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:42.763112   37828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33455
	I0612 20:35:42.763528   37828 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:42.763950   37828 main.go:141] libmachine: Using API Version  1
	I0612 20:35:42.763975   37828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:42.764310   37828 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:42.764470   37828 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:35:42.767121   37828 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:42.767633   37828 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:35:42.767661   37828 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:42.767736   37828 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:35:42.768041   37828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:42.768076   37828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:42.782738   37828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46485
	I0612 20:35:42.783160   37828 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:42.783791   37828 main.go:141] libmachine: Using API Version  1
	I0612 20:35:42.783813   37828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:42.784182   37828 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:42.784390   37828 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:35:42.784565   37828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:42.784583   37828 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:35:42.788189   37828 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:42.788649   37828 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:35:42.788678   37828 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:42.788862   37828 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:35:42.789079   37828 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:35:42.789239   37828 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:35:42.789378   37828 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:35:42.875419   37828 ssh_runner.go:195] Run: systemctl --version
	I0612 20:35:42.882019   37828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:42.897580   37828 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:35:42.897610   37828 api_server.go:166] Checking apiserver status ...
	I0612 20:35:42.897648   37828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:35:42.913130   37828 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0612 20:35:42.923600   37828 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:35:42.923669   37828 ssh_runner.go:195] Run: ls
	I0612 20:35:42.928480   37828 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:35:42.934775   37828 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:35:42.934806   37828 status.go:422] ha-844626 apiserver status = Running (err=<nil>)
	I0612 20:35:42.934819   37828 status.go:257] ha-844626 status: &{Name:ha-844626 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:35:42.934845   37828 status.go:255] checking status of ha-844626-m02 ...
	I0612 20:35:42.935164   37828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:42.935227   37828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:42.950527   37828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0612 20:35:42.950965   37828 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:42.951520   37828 main.go:141] libmachine: Using API Version  1
	I0612 20:35:42.951545   37828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:42.951889   37828 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:42.952071   37828 main.go:141] libmachine: (ha-844626-m02) Calling .GetState
	I0612 20:35:42.953765   37828 status.go:330] ha-844626-m02 host status = "Running" (err=<nil>)
	I0612 20:35:42.953783   37828 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:35:42.954147   37828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:42.954181   37828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:42.969363   37828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I0612 20:35:42.969742   37828 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:42.970220   37828 main.go:141] libmachine: Using API Version  1
	I0612 20:35:42.970248   37828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:42.970547   37828 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:42.970783   37828 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:35:42.973618   37828 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:42.974075   37828 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:35:42.974101   37828 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:42.974265   37828 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:35:42.974540   37828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:42.974572   37828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:42.989086   37828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43153
	I0612 20:35:42.989590   37828 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:42.990061   37828 main.go:141] libmachine: Using API Version  1
	I0612 20:35:42.990079   37828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:42.990373   37828 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:42.990526   37828 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:35:42.990726   37828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:42.990747   37828 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:35:42.993507   37828 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:42.994004   37828 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:35:42.994028   37828 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:42.994171   37828 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:35:42.994359   37828 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:35:42.994503   37828 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:35:42.994689   37828 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	W0612 20:35:43.503448   37828 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:35:43.503499   37828 retry.go:31] will retry after 193.62401ms: dial tcp 192.168.39.108:22: connect: no route to host
	W0612 20:35:46.767505   37828 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.108:22: connect: no route to host
	W0612 20:35:46.767588   37828 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	E0612 20:35:46.767603   37828 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:35:46.767613   37828 status.go:257] ha-844626-m02 status: &{Name:ha-844626-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0612 20:35:46.767637   37828 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:35:46.767645   37828 status.go:255] checking status of ha-844626-m03 ...
	I0612 20:35:46.767930   37828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:46.767989   37828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:46.782842   37828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I0612 20:35:46.783332   37828 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:46.783865   37828 main.go:141] libmachine: Using API Version  1
	I0612 20:35:46.783891   37828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:46.784193   37828 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:46.784354   37828 main.go:141] libmachine: (ha-844626-m03) Calling .GetState
	I0612 20:35:46.785996   37828 status.go:330] ha-844626-m03 host status = "Running" (err=<nil>)
	I0612 20:35:46.786016   37828 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:35:46.786492   37828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:46.786542   37828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:46.802970   37828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35547
	I0612 20:35:46.803417   37828 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:46.803870   37828 main.go:141] libmachine: Using API Version  1
	I0612 20:35:46.803894   37828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:46.804231   37828 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:46.804428   37828 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:35:46.807163   37828 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:46.807695   37828 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:35:46.807723   37828 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:46.807901   37828 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:35:46.808231   37828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:46.808272   37828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:46.823540   37828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44965
	I0612 20:35:46.823908   37828 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:46.824391   37828 main.go:141] libmachine: Using API Version  1
	I0612 20:35:46.824413   37828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:46.824709   37828 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:46.824871   37828 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:35:46.825052   37828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:46.825070   37828 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:35:46.827768   37828 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:46.828218   37828 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:35:46.828238   37828 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:46.828453   37828 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:35:46.828607   37828 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:35:46.828740   37828 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:35:46.828875   37828 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:35:46.907690   37828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:46.923142   37828 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:35:46.923192   37828 api_server.go:166] Checking apiserver status ...
	I0612 20:35:46.923235   37828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:35:46.937500   37828 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0612 20:35:46.947587   37828 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:35:46.947643   37828 ssh_runner.go:195] Run: ls
	I0612 20:35:46.952238   37828 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:35:46.959164   37828 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:35:46.959208   37828 status.go:422] ha-844626-m03 apiserver status = Running (err=<nil>)
	I0612 20:35:46.959217   37828 status.go:257] ha-844626-m03 status: &{Name:ha-844626-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:35:46.959230   37828 status.go:255] checking status of ha-844626-m04 ...
	I0612 20:35:46.959519   37828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:46.959551   37828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:46.975679   37828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0612 20:35:46.976118   37828 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:46.976629   37828 main.go:141] libmachine: Using API Version  1
	I0612 20:35:46.976650   37828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:46.976981   37828 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:46.977196   37828 main.go:141] libmachine: (ha-844626-m04) Calling .GetState
	I0612 20:35:46.978906   37828 status.go:330] ha-844626-m04 host status = "Running" (err=<nil>)
	I0612 20:35:46.978924   37828 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:35:46.979275   37828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:46.979318   37828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:46.994082   37828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39733
	I0612 20:35:46.994559   37828 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:46.995131   37828 main.go:141] libmachine: Using API Version  1
	I0612 20:35:46.995156   37828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:46.995497   37828 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:46.995666   37828 main.go:141] libmachine: (ha-844626-m04) Calling .GetIP
	I0612 20:35:46.998784   37828 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:46.999320   37828 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:35:46.999343   37828 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:46.999509   37828 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:35:46.999972   37828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:47.000025   37828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:47.015853   37828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36599
	I0612 20:35:47.016288   37828 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:47.016823   37828 main.go:141] libmachine: Using API Version  1
	I0612 20:35:47.016844   37828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:47.017171   37828 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:47.017393   37828 main.go:141] libmachine: (ha-844626-m04) Calling .DriverName
	I0612 20:35:47.017619   37828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:47.017642   37828 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHHostname
	I0612 20:35:47.021722   37828 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:47.022185   37828 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:35:47.022210   37828 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:47.022371   37828 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHPort
	I0612 20:35:47.022657   37828 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHKeyPath
	I0612 20:35:47.022828   37828 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHUsername
	I0612 20:35:47.023017   37828 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m04/id_rsa Username:docker}
	I0612 20:35:47.111838   37828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:47.126970   37828 status.go:257] ha-844626-m04 status: &{Name:ha-844626-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
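For the nodes that stay Running, each run above ends its control-plane check by requesting https://192.168.39.254:8443/healthz and treating an HTTP 200 with body "ok" as "apiserver status = Running". The sketch below mirrors that health probe; skipping TLS verification instead of loading the profile's cluster CA is an assumption made purely for illustration and is not how minikube's client is configured.

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// checkHealthz mirrors the "Checking apiserver healthz" step in the log:
	// a GET against the load-balancer endpoint, expecting HTTP 200 and the body "ok".
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustrative only: skip cert verification rather than loading
				// the cluster CA from the minikube profile directory.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("unexpected healthz response: %d %q", resp.StatusCode, body)
		}
		return nil
	}
	
	func main() {
		if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
			fmt.Println("apiserver not healthy:", err)
			return
		}
		fmt.Println("apiserver status = Running")
	}

This is why ha-844626 and ha-844626-m03 report apiserver: Running in every run while m02, whose SSH probe never succeeds, is marked Nonexistent without the healthz step ever being reached.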
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr: exit status 3 (3.716298393s)

                                                
                                                
-- stdout --
	ha-844626
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-844626-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 20:35:50.492011   37945 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:35:50.492324   37945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:35:50.492344   37945 out.go:304] Setting ErrFile to fd 2...
	I0612 20:35:50.492352   37945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:35:50.492559   37945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:35:50.492737   37945 out.go:298] Setting JSON to false
	I0612 20:35:50.492760   37945 mustload.go:65] Loading cluster: ha-844626
	I0612 20:35:50.492823   37945 notify.go:220] Checking for updates...
	I0612 20:35:50.493184   37945 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:35:50.493204   37945 status.go:255] checking status of ha-844626 ...
	I0612 20:35:50.493722   37945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:50.493796   37945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:50.510103   37945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0612 20:35:50.510567   37945 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:50.511158   37945 main.go:141] libmachine: Using API Version  1
	I0612 20:35:50.511204   37945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:50.511552   37945 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:50.511733   37945 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:35:50.513285   37945 status.go:330] ha-844626 host status = "Running" (err=<nil>)
	I0612 20:35:50.513303   37945 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:35:50.513585   37945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:50.513636   37945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:50.529960   37945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I0612 20:35:50.530421   37945 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:50.530852   37945 main.go:141] libmachine: Using API Version  1
	I0612 20:35:50.530870   37945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:50.531225   37945 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:50.531412   37945 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:35:50.534075   37945 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:50.534454   37945 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:35:50.534474   37945 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:50.534644   37945 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:35:50.534976   37945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:50.535021   37945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:50.550737   37945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41241
	I0612 20:35:50.551162   37945 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:50.551683   37945 main.go:141] libmachine: Using API Version  1
	I0612 20:35:50.551704   37945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:50.552065   37945 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:50.552293   37945 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:35:50.552542   37945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:50.552573   37945 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:35:50.555534   37945 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:50.556061   37945 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:35:50.556091   37945 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:50.556235   37945 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:35:50.556402   37945 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:35:50.556520   37945 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:35:50.556702   37945 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:35:50.635184   37945 ssh_runner.go:195] Run: systemctl --version
	I0612 20:35:50.641542   37945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:50.657730   37945 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:35:50.657763   37945 api_server.go:166] Checking apiserver status ...
	I0612 20:35:50.657805   37945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:35:50.675519   37945 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0612 20:35:50.685695   37945 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:35:50.685756   37945 ssh_runner.go:195] Run: ls
	I0612 20:35:50.690863   37945 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:35:50.695027   37945 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:35:50.695048   37945 status.go:422] ha-844626 apiserver status = Running (err=<nil>)
	I0612 20:35:50.695058   37945 status.go:257] ha-844626 status: &{Name:ha-844626 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:35:50.695072   37945 status.go:255] checking status of ha-844626-m02 ...
	I0612 20:35:50.695392   37945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:50.695425   37945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:50.710674   37945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34009
	I0612 20:35:50.711073   37945 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:50.711522   37945 main.go:141] libmachine: Using API Version  1
	I0612 20:35:50.711541   37945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:50.711870   37945 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:50.712042   37945 main.go:141] libmachine: (ha-844626-m02) Calling .GetState
	I0612 20:35:50.713397   37945 status.go:330] ha-844626-m02 host status = "Running" (err=<nil>)
	I0612 20:35:50.713416   37945 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:35:50.713809   37945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:50.713865   37945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:50.729461   37945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40711
	I0612 20:35:50.729852   37945 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:50.730350   37945 main.go:141] libmachine: Using API Version  1
	I0612 20:35:50.730368   37945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:50.730637   37945 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:50.730829   37945 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:35:50.733419   37945 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:50.733865   37945 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:35:50.733893   37945 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:50.734052   37945 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:35:50.734369   37945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:50.734403   37945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:50.749205   37945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45229
	I0612 20:35:50.749661   37945 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:50.750172   37945 main.go:141] libmachine: Using API Version  1
	I0612 20:35:50.750202   37945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:50.750520   37945 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:50.750720   37945 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:35:50.750905   37945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:50.750922   37945 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:35:50.753926   37945 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:50.754360   37945 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:35:50.754378   37945 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:50.754597   37945 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:35:50.754775   37945 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:35:50.754919   37945 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:35:50.755044   37945 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	W0612 20:35:53.807487   37945 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.108:22: connect: no route to host
	W0612 20:35:53.807575   37945 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	E0612 20:35:53.807593   37945 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:35:53.807620   37945 status.go:257] ha-844626-m02 status: &{Name:ha-844626-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0612 20:35:53.807642   37945 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:35:53.807655   37945 status.go:255] checking status of ha-844626-m03 ...
	I0612 20:35:53.808299   37945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:53.808350   37945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:53.824486   37945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36159
	I0612 20:35:53.824967   37945 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:53.825442   37945 main.go:141] libmachine: Using API Version  1
	I0612 20:35:53.825462   37945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:53.825740   37945 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:53.825941   37945 main.go:141] libmachine: (ha-844626-m03) Calling .GetState
	I0612 20:35:53.827566   37945 status.go:330] ha-844626-m03 host status = "Running" (err=<nil>)
	I0612 20:35:53.827584   37945 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:35:53.827979   37945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:53.828019   37945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:53.842444   37945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33469
	I0612 20:35:53.842930   37945 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:53.843484   37945 main.go:141] libmachine: Using API Version  1
	I0612 20:35:53.843510   37945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:53.843816   37945 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:53.844026   37945 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:35:53.846701   37945 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:53.847106   37945 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:35:53.847144   37945 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:53.847366   37945 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:35:53.847691   37945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:53.847736   37945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:53.862400   37945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40543
	I0612 20:35:53.862783   37945 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:53.863264   37945 main.go:141] libmachine: Using API Version  1
	I0612 20:35:53.863295   37945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:53.863639   37945 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:53.863867   37945 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:35:53.864105   37945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:53.864129   37945 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:35:53.867411   37945 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:53.867935   37945 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:35:53.867963   37945 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:35:53.868144   37945 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:35:53.868335   37945 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:35:53.868506   37945 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:35:53.868614   37945 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:35:53.946926   37945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:53.962172   37945 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:35:53.962196   37945 api_server.go:166] Checking apiserver status ...
	I0612 20:35:53.962233   37945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:35:53.980364   37945 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0612 20:35:53.991736   37945 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:35:53.991812   37945 ssh_runner.go:195] Run: ls
	I0612 20:35:53.996842   37945 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:35:54.001446   37945 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:35:54.001478   37945 status.go:422] ha-844626-m03 apiserver status = Running (err=<nil>)
	I0612 20:35:54.001489   37945 status.go:257] ha-844626-m03 status: &{Name:ha-844626-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:35:54.001509   37945 status.go:255] checking status of ha-844626-m04 ...
	I0612 20:35:54.001890   37945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:54.001938   37945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:54.016813   37945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38783
	I0612 20:35:54.017186   37945 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:54.017662   37945 main.go:141] libmachine: Using API Version  1
	I0612 20:35:54.017682   37945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:54.017987   37945 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:54.018177   37945 main.go:141] libmachine: (ha-844626-m04) Calling .GetState
	I0612 20:35:54.019651   37945 status.go:330] ha-844626-m04 host status = "Running" (err=<nil>)
	I0612 20:35:54.019668   37945 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:35:54.019925   37945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:54.019958   37945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:54.034652   37945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37031
	I0612 20:35:54.035122   37945 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:54.035680   37945 main.go:141] libmachine: Using API Version  1
	I0612 20:35:54.035699   37945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:54.036045   37945 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:54.036211   37945 main.go:141] libmachine: (ha-844626-m04) Calling .GetIP
	I0612 20:35:54.039128   37945 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:54.039616   37945 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:35:54.039644   37945 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:54.039793   37945 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:35:54.040138   37945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:54.040181   37945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:54.055825   37945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35149
	I0612 20:35:54.056242   37945 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:54.056699   37945 main.go:141] libmachine: Using API Version  1
	I0612 20:35:54.056721   37945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:54.057032   37945 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:54.057222   37945 main.go:141] libmachine: (ha-844626-m04) Calling .DriverName
	I0612 20:35:54.057375   37945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:54.057393   37945 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHHostname
	I0612 20:35:54.060028   37945 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:54.060360   37945 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:35:54.060389   37945 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:35:54.060508   37945 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHPort
	I0612 20:35:54.060654   37945 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHKeyPath
	I0612 20:35:54.060813   37945 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHUsername
	I0612 20:35:54.060955   37945 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m04/id_rsa Username:docker}
	I0612 20:35:54.147609   37945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:54.164531   37945 status.go:257] ha-844626-m04 status: &{Name:ha-844626-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
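(Editor's note, for context only: each status attempt in the logs above probes every node the same way — an SSH run of "df -h /var", "systemctl is-active --quiet service kubelet", and "pgrep -xnf kube-apiserver.*minikube.*", followed by an HTTPS GET of /healthz on the load-balancer endpoint 192.168.39.254:8443 reported at api_server.go:253. The following is a minimal, illustrative Go sketch of that final healthz probe only; it is not minikube's implementation, and the function and variable names are invented. Only the endpoint URL is taken from the log.)

	// apiserverHealthy issues GET <endpoint>/healthz and treats an HTTP 200
	// response with body "ok" as a running apiserver, mirroring the
	// "returned 200: ok" lines in the log above. The apiserver serves a
	// self-signed certificate, so verification is skipped in this sketch.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err // e.g. "no route to host" while a node is down
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.39.254:8443")
		fmt.Println(ok, err)
	}
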
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr: exit status 3 (3.746394321s)

                                                
                                                
-- stdout --
	ha-844626
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-844626-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 20:35:57.442490   38045 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:35:57.442758   38045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:35:57.442767   38045 out.go:304] Setting ErrFile to fd 2...
	I0612 20:35:57.442772   38045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:35:57.443093   38045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:35:57.443364   38045 out.go:298] Setting JSON to false
	I0612 20:35:57.443396   38045 mustload.go:65] Loading cluster: ha-844626
	I0612 20:35:57.443434   38045 notify.go:220] Checking for updates...
	I0612 20:35:57.443860   38045 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:35:57.443878   38045 status.go:255] checking status of ha-844626 ...
	I0612 20:35:57.444328   38045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:57.444403   38045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:57.461645   38045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40933
	I0612 20:35:57.462042   38045 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:57.462601   38045 main.go:141] libmachine: Using API Version  1
	I0612 20:35:57.462622   38045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:57.462976   38045 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:57.463203   38045 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:35:57.464628   38045 status.go:330] ha-844626 host status = "Running" (err=<nil>)
	I0612 20:35:57.464643   38045 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:35:57.464931   38045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:57.464963   38045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:57.480784   38045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I0612 20:35:57.481235   38045 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:57.481727   38045 main.go:141] libmachine: Using API Version  1
	I0612 20:35:57.481754   38045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:57.482045   38045 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:57.482220   38045 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:35:57.485079   38045 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:57.485544   38045 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:35:57.485569   38045 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:57.485770   38045 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:35:57.486124   38045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:57.486179   38045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:57.501219   38045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40459
	I0612 20:35:57.501624   38045 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:57.502076   38045 main.go:141] libmachine: Using API Version  1
	I0612 20:35:57.502096   38045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:57.502380   38045 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:57.502538   38045 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:35:57.502749   38045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:57.502778   38045 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:35:57.505852   38045 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:57.506235   38045 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:35:57.506261   38045 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:35:57.506384   38045 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:35:57.506564   38045 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:35:57.506724   38045 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:35:57.506886   38045 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:35:57.587331   38045 ssh_runner.go:195] Run: systemctl --version
	I0612 20:35:57.594161   38045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:35:57.611199   38045 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:35:57.611229   38045 api_server.go:166] Checking apiserver status ...
	I0612 20:35:57.611261   38045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:35:57.625193   38045 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0612 20:35:57.634648   38045 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:35:57.634696   38045 ssh_runner.go:195] Run: ls
	I0612 20:35:57.639168   38045 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:35:57.644404   38045 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:35:57.644426   38045 status.go:422] ha-844626 apiserver status = Running (err=<nil>)
	I0612 20:35:57.644435   38045 status.go:257] ha-844626 status: &{Name:ha-844626 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:35:57.644464   38045 status.go:255] checking status of ha-844626-m02 ...
	I0612 20:35:57.644764   38045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:57.644802   38045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:57.659972   38045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34719
	I0612 20:35:57.660407   38045 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:57.660836   38045 main.go:141] libmachine: Using API Version  1
	I0612 20:35:57.660857   38045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:57.661183   38045 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:57.661461   38045 main.go:141] libmachine: (ha-844626-m02) Calling .GetState
	I0612 20:35:57.663387   38045 status.go:330] ha-844626-m02 host status = "Running" (err=<nil>)
	I0612 20:35:57.663402   38045 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:35:57.663675   38045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:57.663744   38045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:57.678994   38045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45327
	I0612 20:35:57.679522   38045 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:57.679981   38045 main.go:141] libmachine: Using API Version  1
	I0612 20:35:57.680009   38045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:57.680364   38045 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:57.680574   38045 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:35:57.683545   38045 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:57.683990   38045 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:35:57.684018   38045 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:57.684164   38045 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:35:57.684511   38045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:35:57.684550   38045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:35:57.700884   38045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35519
	I0612 20:35:57.701271   38045 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:35:57.701815   38045 main.go:141] libmachine: Using API Version  1
	I0612 20:35:57.701839   38045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:35:57.702159   38045 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:35:57.702329   38045 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:35:57.702532   38045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:35:57.702557   38045 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:35:57.705496   38045 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:57.705925   38045 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:35:57.705949   38045 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:35:57.706091   38045 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:35:57.706272   38045 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:35:57.706425   38045 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:35:57.706579   38045 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	W0612 20:36:00.787426   38045 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.108:22: connect: no route to host
	W0612 20:36:00.787515   38045 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	E0612 20:36:00.787533   38045 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:36:00.787543   38045 status.go:257] ha-844626-m02 status: &{Name:ha-844626-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0612 20:36:00.787558   38045 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:36:00.787616   38045 status.go:255] checking status of ha-844626-m03 ...
	I0612 20:36:00.788172   38045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:00.788223   38045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:00.805251   38045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38873
	I0612 20:36:00.805679   38045 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:00.806136   38045 main.go:141] libmachine: Using API Version  1
	I0612 20:36:00.806158   38045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:00.806456   38045 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:00.806651   38045 main.go:141] libmachine: (ha-844626-m03) Calling .GetState
	I0612 20:36:00.808428   38045 status.go:330] ha-844626-m03 host status = "Running" (err=<nil>)
	I0612 20:36:00.808444   38045 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:36:00.808795   38045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:00.808833   38045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:00.824928   38045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33181
	I0612 20:36:00.825383   38045 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:00.825886   38045 main.go:141] libmachine: Using API Version  1
	I0612 20:36:00.825912   38045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:00.826199   38045 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:00.826379   38045 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:36:00.829142   38045 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:00.829566   38045 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:36:00.829597   38045 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:00.829832   38045 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:36:00.830289   38045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:00.830360   38045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:00.847325   38045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41213
	I0612 20:36:00.847721   38045 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:00.848242   38045 main.go:141] libmachine: Using API Version  1
	I0612 20:36:00.848279   38045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:00.848570   38045 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:00.848753   38045 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:36:00.848973   38045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:36:00.848992   38045 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:36:00.852055   38045 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:00.852491   38045 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:36:00.852520   38045 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:00.852649   38045 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:36:00.852808   38045 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:36:00.852951   38045 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:36:00.853117   38045 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:36:00.931953   38045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:36:00.947972   38045 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:36:00.948031   38045 api_server.go:166] Checking apiserver status ...
	I0612 20:36:00.948085   38045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:36:00.961938   38045 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0612 20:36:00.974309   38045 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:36:00.974386   38045 ssh_runner.go:195] Run: ls
	I0612 20:36:00.979545   38045 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:36:00.984063   38045 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:36:00.984086   38045 status.go:422] ha-844626-m03 apiserver status = Running (err=<nil>)
	I0612 20:36:00.984097   38045 status.go:257] ha-844626-m03 status: &{Name:ha-844626-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:36:00.984117   38045 status.go:255] checking status of ha-844626-m04 ...
	I0612 20:36:00.984592   38045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:00.984642   38045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:00.999714   38045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46471
	I0612 20:36:01.000112   38045 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:01.000563   38045 main.go:141] libmachine: Using API Version  1
	I0612 20:36:01.000586   38045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:01.000888   38045 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:01.001108   38045 main.go:141] libmachine: (ha-844626-m04) Calling .GetState
	I0612 20:36:01.002680   38045 status.go:330] ha-844626-m04 host status = "Running" (err=<nil>)
	I0612 20:36:01.002698   38045 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:36:01.003005   38045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:01.003039   38045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:01.018178   38045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41337
	I0612 20:36:01.018561   38045 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:01.018968   38045 main.go:141] libmachine: Using API Version  1
	I0612 20:36:01.018986   38045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:01.019383   38045 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:01.019554   38045 main.go:141] libmachine: (ha-844626-m04) Calling .GetIP
	I0612 20:36:01.022595   38045 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:01.023146   38045 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:36:01.023188   38045 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:01.023468   38045 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:36:01.023798   38045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:01.023838   38045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:01.038719   38045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I0612 20:36:01.039078   38045 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:01.039546   38045 main.go:141] libmachine: Using API Version  1
	I0612 20:36:01.039564   38045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:01.039869   38045 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:01.040080   38045 main.go:141] libmachine: (ha-844626-m04) Calling .DriverName
	I0612 20:36:01.040274   38045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:36:01.040298   38045 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHHostname
	I0612 20:36:01.043142   38045 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:01.043610   38045 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:36:01.043647   38045 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:01.043781   38045 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHPort
	I0612 20:36:01.043941   38045 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHKeyPath
	I0612 20:36:01.044111   38045 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHUsername
	I0612 20:36:01.044222   38045 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m04/id_rsa Username:docker}
	I0612 20:36:01.130693   38045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:36:01.145945   38045 status.go:257] ha-844626-m04 status: &{Name:ha-844626-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr: exit status 3 (3.731748205s)

                                                
                                                
-- stdout --
	ha-844626
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-844626-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 20:36:05.784752   38162 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:36:05.785260   38162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:36:05.785311   38162 out.go:304] Setting ErrFile to fd 2...
	I0612 20:36:05.785329   38162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:36:05.785767   38162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:36:05.786105   38162 out.go:298] Setting JSON to false
	I0612 20:36:05.786155   38162 mustload.go:65] Loading cluster: ha-844626
	I0612 20:36:05.786284   38162 notify.go:220] Checking for updates...
	I0612 20:36:05.786922   38162 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:36:05.786945   38162 status.go:255] checking status of ha-844626 ...
	I0612 20:36:05.787488   38162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:05.787542   38162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:05.802607   38162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34525
	I0612 20:36:05.803107   38162 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:05.803691   38162 main.go:141] libmachine: Using API Version  1
	I0612 20:36:05.803713   38162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:05.804180   38162 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:05.804429   38162 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:36:05.806312   38162 status.go:330] ha-844626 host status = "Running" (err=<nil>)
	I0612 20:36:05.806327   38162 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:36:05.806601   38162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:05.806633   38162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:05.821954   38162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44739
	I0612 20:36:05.822361   38162 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:05.822785   38162 main.go:141] libmachine: Using API Version  1
	I0612 20:36:05.822808   38162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:05.823096   38162 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:05.823312   38162 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:36:05.826576   38162 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:36:05.827061   38162 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:36:05.827094   38162 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:36:05.827283   38162 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:36:05.827687   38162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:05.827766   38162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:05.842965   38162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0612 20:36:05.843444   38162 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:05.843905   38162 main.go:141] libmachine: Using API Version  1
	I0612 20:36:05.843925   38162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:05.844233   38162 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:05.844448   38162 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:36:05.844683   38162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:36:05.844710   38162 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:36:05.847618   38162 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:36:05.847993   38162 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:36:05.848031   38162 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:36:05.848144   38162 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:36:05.848320   38162 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:36:05.848490   38162 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:36:05.848633   38162 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:36:05.927680   38162 ssh_runner.go:195] Run: systemctl --version
	I0612 20:36:05.934485   38162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:36:05.953097   38162 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:36:05.953129   38162 api_server.go:166] Checking apiserver status ...
	I0612 20:36:05.953178   38162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:36:05.968734   38162 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0612 20:36:05.978774   38162 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:36:05.978864   38162 ssh_runner.go:195] Run: ls
	I0612 20:36:05.983779   38162 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:36:05.989598   38162 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:36:05.989621   38162 status.go:422] ha-844626 apiserver status = Running (err=<nil>)
	I0612 20:36:05.989631   38162 status.go:257] ha-844626 status: &{Name:ha-844626 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:36:05.989656   38162 status.go:255] checking status of ha-844626-m02 ...
	I0612 20:36:05.989978   38162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:05.990014   38162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:06.006412   38162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39459
	I0612 20:36:06.006797   38162 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:06.007301   38162 main.go:141] libmachine: Using API Version  1
	I0612 20:36:06.007322   38162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:06.007669   38162 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:06.007868   38162 main.go:141] libmachine: (ha-844626-m02) Calling .GetState
	I0612 20:36:06.009400   38162 status.go:330] ha-844626-m02 host status = "Running" (err=<nil>)
	I0612 20:36:06.009418   38162 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:36:06.009735   38162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:06.009765   38162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:06.024447   38162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41305
	I0612 20:36:06.024854   38162 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:06.025307   38162 main.go:141] libmachine: Using API Version  1
	I0612 20:36:06.025331   38162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:06.025743   38162 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:06.025947   38162 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:36:06.028632   38162 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:36:06.029064   38162 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:36:06.029085   38162 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:36:06.029232   38162 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:36:06.029541   38162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:06.029595   38162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:06.044472   38162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34341
	I0612 20:36:06.044858   38162 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:06.045315   38162 main.go:141] libmachine: Using API Version  1
	I0612 20:36:06.045338   38162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:06.045680   38162 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:06.045839   38162 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:36:06.046020   38162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:36:06.046059   38162 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:36:06.048913   38162 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:36:06.049377   38162 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:36:06.049401   38162 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:36:06.049538   38162 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:36:06.049693   38162 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:36:06.049849   38162 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:36:06.049977   38162 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	W0612 20:36:09.107428   38162 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.108:22: connect: no route to host
	W0612 20:36:09.107509   38162 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	E0612 20:36:09.107527   38162 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:36:09.107536   38162 status.go:257] ha-844626-m02 status: &{Name:ha-844626-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0612 20:36:09.107556   38162 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.108:22: connect: no route to host
	I0612 20:36:09.107594   38162 status.go:255] checking status of ha-844626-m03 ...
	I0612 20:36:09.107961   38162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:09.108009   38162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:09.122735   38162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36739
	I0612 20:36:09.123134   38162 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:09.123745   38162 main.go:141] libmachine: Using API Version  1
	I0612 20:36:09.123767   38162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:09.124085   38162 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:09.124381   38162 main.go:141] libmachine: (ha-844626-m03) Calling .GetState
	I0612 20:36:09.126213   38162 status.go:330] ha-844626-m03 host status = "Running" (err=<nil>)
	I0612 20:36:09.126240   38162 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:36:09.126626   38162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:09.126675   38162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:09.141240   38162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42661
	I0612 20:36:09.141580   38162 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:09.142045   38162 main.go:141] libmachine: Using API Version  1
	I0612 20:36:09.142074   38162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:09.142372   38162 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:09.142568   38162 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:36:09.145661   38162 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:09.146188   38162 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:36:09.146224   38162 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:09.146316   38162 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:36:09.146585   38162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:09.146621   38162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:09.161880   38162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39141
	I0612 20:36:09.162309   38162 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:09.162790   38162 main.go:141] libmachine: Using API Version  1
	I0612 20:36:09.162816   38162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:09.163214   38162 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:09.163417   38162 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:36:09.163590   38162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:36:09.163612   38162 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:36:09.166746   38162 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:09.167220   38162 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:36:09.167239   38162 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:09.167462   38162 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:36:09.167607   38162 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:36:09.167746   38162 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:36:09.167894   38162 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:36:09.250000   38162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:36:09.269000   38162 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:36:09.269033   38162 api_server.go:166] Checking apiserver status ...
	I0612 20:36:09.269074   38162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:36:09.284917   38162 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0612 20:36:09.296138   38162 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:36:09.296203   38162 ssh_runner.go:195] Run: ls
	I0612 20:36:09.301370   38162 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:36:09.308540   38162 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:36:09.308563   38162 status.go:422] ha-844626-m03 apiserver status = Running (err=<nil>)
	I0612 20:36:09.308571   38162 status.go:257] ha-844626-m03 status: &{Name:ha-844626-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:36:09.308585   38162 status.go:255] checking status of ha-844626-m04 ...
	I0612 20:36:09.308955   38162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:09.309006   38162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:09.325167   38162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40711
	I0612 20:36:09.325672   38162 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:09.326167   38162 main.go:141] libmachine: Using API Version  1
	I0612 20:36:09.326196   38162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:09.326532   38162 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:09.326767   38162 main.go:141] libmachine: (ha-844626-m04) Calling .GetState
	I0612 20:36:09.328618   38162 status.go:330] ha-844626-m04 host status = "Running" (err=<nil>)
	I0612 20:36:09.328635   38162 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:36:09.328926   38162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:09.328961   38162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:09.344119   38162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36749
	I0612 20:36:09.344604   38162 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:09.345165   38162 main.go:141] libmachine: Using API Version  1
	I0612 20:36:09.345193   38162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:09.345506   38162 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:09.345747   38162 main.go:141] libmachine: (ha-844626-m04) Calling .GetIP
	I0612 20:36:09.348765   38162 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:09.349222   38162 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:36:09.349255   38162 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:09.349393   38162 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:36:09.349684   38162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:09.349737   38162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:09.365216   38162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38351
	I0612 20:36:09.365628   38162 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:09.366079   38162 main.go:141] libmachine: Using API Version  1
	I0612 20:36:09.366106   38162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:09.366445   38162 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:09.366632   38162 main.go:141] libmachine: (ha-844626-m04) Calling .DriverName
	I0612 20:36:09.366798   38162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:36:09.366818   38162 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHHostname
	I0612 20:36:09.369347   38162 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:09.369766   38162 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:36:09.369809   38162 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:09.369944   38162 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHPort
	I0612 20:36:09.370115   38162 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHKeyPath
	I0612 20:36:09.370237   38162 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHUsername
	I0612 20:36:09.370362   38162 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m04/id_rsa Username:docker}
	I0612 20:36:09.459312   38162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:36:09.474401   38162 status.go:257] ha-844626-m04 status: &{Name:ha-844626-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr: exit status 7 (610.942363ms)

                                                
                                                
-- stdout --
	ha-844626
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-844626-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 20:36:20.752605   38316 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:36:20.752838   38316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:36:20.752847   38316 out.go:304] Setting ErrFile to fd 2...
	I0612 20:36:20.752851   38316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:36:20.753052   38316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:36:20.753208   38316 out.go:298] Setting JSON to false
	I0612 20:36:20.753230   38316 mustload.go:65] Loading cluster: ha-844626
	I0612 20:36:20.753370   38316 notify.go:220] Checking for updates...
	I0612 20:36:20.753648   38316 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:36:20.753664   38316 status.go:255] checking status of ha-844626 ...
	I0612 20:36:20.754140   38316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:20.754218   38316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:20.771978   38316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36943
	I0612 20:36:20.772396   38316 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:20.772978   38316 main.go:141] libmachine: Using API Version  1
	I0612 20:36:20.773008   38316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:20.773312   38316 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:20.773528   38316 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:36:20.775128   38316 status.go:330] ha-844626 host status = "Running" (err=<nil>)
	I0612 20:36:20.775147   38316 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:36:20.775501   38316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:20.775542   38316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:20.791371   38316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I0612 20:36:20.791769   38316 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:20.792261   38316 main.go:141] libmachine: Using API Version  1
	I0612 20:36:20.792279   38316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:20.792603   38316 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:20.792776   38316 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:36:20.795592   38316 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:36:20.796138   38316 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:36:20.796176   38316 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:36:20.796355   38316 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:36:20.796727   38316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:20.796765   38316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:20.812116   38316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40729
	I0612 20:36:20.812617   38316 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:20.813269   38316 main.go:141] libmachine: Using API Version  1
	I0612 20:36:20.813290   38316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:20.813555   38316 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:20.813721   38316 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:36:20.813861   38316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:36:20.813890   38316 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:36:20.816710   38316 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:36:20.817128   38316 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:36:20.817154   38316 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:36:20.817301   38316 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:36:20.817495   38316 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:36:20.817639   38316 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:36:20.817826   38316 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:36:20.899228   38316 ssh_runner.go:195] Run: systemctl --version
	I0612 20:36:20.905992   38316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:36:20.922214   38316 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:36:20.922248   38316 api_server.go:166] Checking apiserver status ...
	I0612 20:36:20.922292   38316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:36:20.936262   38316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0612 20:36:20.945652   38316 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:36:20.945695   38316 ssh_runner.go:195] Run: ls
	I0612 20:36:20.950165   38316 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:36:20.954411   38316 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:36:20.954430   38316 status.go:422] ha-844626 apiserver status = Running (err=<nil>)
	I0612 20:36:20.954438   38316 status.go:257] ha-844626 status: &{Name:ha-844626 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:36:20.954451   38316 status.go:255] checking status of ha-844626-m02 ...
	I0612 20:36:20.954740   38316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:20.954773   38316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:20.970802   38316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33107
	I0612 20:36:20.971207   38316 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:20.971706   38316 main.go:141] libmachine: Using API Version  1
	I0612 20:36:20.971729   38316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:20.972095   38316 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:20.972339   38316 main.go:141] libmachine: (ha-844626-m02) Calling .GetState
	I0612 20:36:20.974083   38316 status.go:330] ha-844626-m02 host status = "Stopped" (err=<nil>)
	I0612 20:36:20.974094   38316 status.go:343] host is not running, skipping remaining checks
	I0612 20:36:20.974099   38316 status.go:257] ha-844626-m02 status: &{Name:ha-844626-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:36:20.974113   38316 status.go:255] checking status of ha-844626-m03 ...
	I0612 20:36:20.974382   38316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:20.974412   38316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:20.989262   38316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34483
	I0612 20:36:20.989679   38316 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:20.990216   38316 main.go:141] libmachine: Using API Version  1
	I0612 20:36:20.990236   38316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:20.990565   38316 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:20.990742   38316 main.go:141] libmachine: (ha-844626-m03) Calling .GetState
	I0612 20:36:20.992432   38316 status.go:330] ha-844626-m03 host status = "Running" (err=<nil>)
	I0612 20:36:20.992450   38316 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:36:20.992740   38316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:20.992781   38316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:21.009097   38316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I0612 20:36:21.009472   38316 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:21.009917   38316 main.go:141] libmachine: Using API Version  1
	I0612 20:36:21.009935   38316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:21.010247   38316 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:21.010442   38316 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:36:21.013264   38316 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:21.013755   38316 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:36:21.013778   38316 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:21.013915   38316 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:36:21.014350   38316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:21.014415   38316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:21.031179   38316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33181
	I0612 20:36:21.031590   38316 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:21.032011   38316 main.go:141] libmachine: Using API Version  1
	I0612 20:36:21.032033   38316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:21.032429   38316 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:21.032633   38316 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:36:21.032826   38316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:36:21.032844   38316 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:36:21.036180   38316 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:21.036633   38316 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:36:21.036657   38316 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:21.036869   38316 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:36:21.037099   38316 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:36:21.037289   38316 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:36:21.037474   38316 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:36:21.114803   38316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:36:21.131154   38316 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:36:21.131208   38316 api_server.go:166] Checking apiserver status ...
	I0612 20:36:21.131258   38316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:36:21.144386   38316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0612 20:36:21.153984   38316 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:36:21.154044   38316 ssh_runner.go:195] Run: ls
	I0612 20:36:21.158489   38316 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:36:21.162933   38316 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:36:21.162952   38316 status.go:422] ha-844626-m03 apiserver status = Running (err=<nil>)
	I0612 20:36:21.162960   38316 status.go:257] ha-844626-m03 status: &{Name:ha-844626-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:36:21.162977   38316 status.go:255] checking status of ha-844626-m04 ...
	I0612 20:36:21.163296   38316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:21.163328   38316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:21.177934   38316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34171
	I0612 20:36:21.178292   38316 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:21.178731   38316 main.go:141] libmachine: Using API Version  1
	I0612 20:36:21.178759   38316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:21.179081   38316 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:21.179305   38316 main.go:141] libmachine: (ha-844626-m04) Calling .GetState
	I0612 20:36:21.180706   38316 status.go:330] ha-844626-m04 host status = "Running" (err=<nil>)
	I0612 20:36:21.180735   38316 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:36:21.180986   38316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:21.181020   38316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:21.194712   38316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36651
	I0612 20:36:21.195109   38316 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:21.195571   38316 main.go:141] libmachine: Using API Version  1
	I0612 20:36:21.195594   38316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:21.195859   38316 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:21.196043   38316 main.go:141] libmachine: (ha-844626-m04) Calling .GetIP
	I0612 20:36:21.198185   38316 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:21.198603   38316 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:36:21.198643   38316 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:21.198768   38316 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:36:21.199025   38316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:21.199061   38316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:21.213685   38316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44925
	I0612 20:36:21.214056   38316 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:21.214491   38316 main.go:141] libmachine: Using API Version  1
	I0612 20:36:21.214505   38316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:21.214759   38316 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:21.214945   38316 main.go:141] libmachine: (ha-844626-m04) Calling .DriverName
	I0612 20:36:21.215131   38316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:36:21.215152   38316 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHHostname
	I0612 20:36:21.217901   38316 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:21.218273   38316 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:36:21.218287   38316 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:21.218407   38316 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHPort
	I0612 20:36:21.218576   38316 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHKeyPath
	I0612 20:36:21.218728   38316 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHUsername
	I0612 20:36:21.218901   38316 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m04/id_rsa Username:docker}
	I0612 20:36:21.303180   38316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:36:21.320616   38316 status.go:257] ha-844626-m04 status: &{Name:ha-844626-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr: exit status 7 (616.951708ms)

                                                
                                                
-- stdout --
	ha-844626
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-844626-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 20:36:29.422030   38420 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:36:29.422256   38420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:36:29.422264   38420 out.go:304] Setting ErrFile to fd 2...
	I0612 20:36:29.422268   38420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:36:29.422420   38420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:36:29.422562   38420 out.go:298] Setting JSON to false
	I0612 20:36:29.422583   38420 mustload.go:65] Loading cluster: ha-844626
	I0612 20:36:29.422624   38420 notify.go:220] Checking for updates...
	I0612 20:36:29.422905   38420 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:36:29.422918   38420 status.go:255] checking status of ha-844626 ...
	I0612 20:36:29.423332   38420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:29.423387   38420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:29.442196   38420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0612 20:36:29.442627   38420 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:29.443287   38420 main.go:141] libmachine: Using API Version  1
	I0612 20:36:29.443306   38420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:29.443662   38420 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:29.443877   38420 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:36:29.445491   38420 status.go:330] ha-844626 host status = "Running" (err=<nil>)
	I0612 20:36:29.445507   38420 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:36:29.445816   38420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:29.445859   38420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:29.460402   38420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33609
	I0612 20:36:29.460754   38420 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:29.461143   38420 main.go:141] libmachine: Using API Version  1
	I0612 20:36:29.461164   38420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:29.461490   38420 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:29.461720   38420 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:36:29.464400   38420 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:36:29.464873   38420 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:36:29.464903   38420 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:36:29.465030   38420 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:36:29.465315   38420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:29.465349   38420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:29.479529   38420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0612 20:36:29.479898   38420 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:29.480368   38420 main.go:141] libmachine: Using API Version  1
	I0612 20:36:29.480388   38420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:29.480676   38420 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:29.480887   38420 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:36:29.481079   38420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:36:29.481112   38420 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:36:29.483566   38420 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:36:29.483871   38420 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:36:29.483892   38420 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:36:29.484084   38420 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:36:29.484234   38420 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:36:29.484363   38420 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:36:29.484495   38420 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:36:29.566791   38420 ssh_runner.go:195] Run: systemctl --version
	I0612 20:36:29.573661   38420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:36:29.589847   38420 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:36:29.589877   38420 api_server.go:166] Checking apiserver status ...
	I0612 20:36:29.589915   38420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:36:29.605482   38420 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0612 20:36:29.615765   38420 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:36:29.615818   38420 ssh_runner.go:195] Run: ls
	I0612 20:36:29.621080   38420 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:36:29.625242   38420 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:36:29.625266   38420 status.go:422] ha-844626 apiserver status = Running (err=<nil>)
	I0612 20:36:29.625275   38420 status.go:257] ha-844626 status: &{Name:ha-844626 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:36:29.625295   38420 status.go:255] checking status of ha-844626-m02 ...
	I0612 20:36:29.625630   38420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:29.625665   38420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:29.641902   38420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34301
	I0612 20:36:29.642370   38420 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:29.642861   38420 main.go:141] libmachine: Using API Version  1
	I0612 20:36:29.642880   38420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:29.643213   38420 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:29.643466   38420 main.go:141] libmachine: (ha-844626-m02) Calling .GetState
	I0612 20:36:29.645128   38420 status.go:330] ha-844626-m02 host status = "Stopped" (err=<nil>)
	I0612 20:36:29.645143   38420 status.go:343] host is not running, skipping remaining checks
	I0612 20:36:29.645149   38420 status.go:257] ha-844626-m02 status: &{Name:ha-844626-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:36:29.645172   38420 status.go:255] checking status of ha-844626-m03 ...
	I0612 20:36:29.645601   38420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:29.645649   38420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:29.660673   38420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38881
	I0612 20:36:29.661054   38420 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:29.661503   38420 main.go:141] libmachine: Using API Version  1
	I0612 20:36:29.661524   38420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:29.661808   38420 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:29.661998   38420 main.go:141] libmachine: (ha-844626-m03) Calling .GetState
	I0612 20:36:29.663462   38420 status.go:330] ha-844626-m03 host status = "Running" (err=<nil>)
	I0612 20:36:29.663478   38420 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:36:29.663774   38420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:29.663807   38420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:29.678747   38420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43593
	I0612 20:36:29.679136   38420 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:29.679623   38420 main.go:141] libmachine: Using API Version  1
	I0612 20:36:29.679644   38420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:29.680132   38420 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:29.680339   38420 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:36:29.683270   38420 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:29.683668   38420 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:36:29.683692   38420 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:29.683816   38420 host.go:66] Checking if "ha-844626-m03" exists ...
	I0612 20:36:29.684102   38420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:29.684141   38420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:29.699877   38420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45555
	I0612 20:36:29.700380   38420 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:29.700891   38420 main.go:141] libmachine: Using API Version  1
	I0612 20:36:29.700917   38420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:29.701258   38420 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:29.701451   38420 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:36:29.701650   38420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:36:29.701669   38420 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:36:29.704556   38420 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:29.704883   38420 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:36:29.704903   38420 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:29.705068   38420 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:36:29.705230   38420 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:36:29.705417   38420 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:36:29.705593   38420 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:36:29.787482   38420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:36:29.802965   38420 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:36:29.803000   38420 api_server.go:166] Checking apiserver status ...
	I0612 20:36:29.803044   38420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:36:29.817441   38420 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0612 20:36:29.828291   38420 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:36:29.828361   38420 ssh_runner.go:195] Run: ls
	I0612 20:36:29.833511   38420 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:36:29.838751   38420 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:36:29.838774   38420 status.go:422] ha-844626-m03 apiserver status = Running (err=<nil>)
	I0612 20:36:29.838782   38420 status.go:257] ha-844626-m03 status: &{Name:ha-844626-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:36:29.838796   38420 status.go:255] checking status of ha-844626-m04 ...
	I0612 20:36:29.839069   38420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:29.839101   38420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:29.853815   38420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38097
	I0612 20:36:29.854220   38420 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:29.854719   38420 main.go:141] libmachine: Using API Version  1
	I0612 20:36:29.854741   38420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:29.855039   38420 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:29.855245   38420 main.go:141] libmachine: (ha-844626-m04) Calling .GetState
	I0612 20:36:29.856896   38420 status.go:330] ha-844626-m04 host status = "Running" (err=<nil>)
	I0612 20:36:29.856913   38420 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:36:29.857236   38420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:29.857294   38420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:29.872046   38420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34041
	I0612 20:36:29.872461   38420 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:29.872950   38420 main.go:141] libmachine: Using API Version  1
	I0612 20:36:29.872967   38420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:29.873206   38420 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:29.873355   38420 main.go:141] libmachine: (ha-844626-m04) Calling .GetIP
	I0612 20:36:29.875900   38420 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:29.876315   38420 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:36:29.876342   38420 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:29.876442   38420 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:36:29.876719   38420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:29.876750   38420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:29.892079   38420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36339
	I0612 20:36:29.892486   38420 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:29.892995   38420 main.go:141] libmachine: Using API Version  1
	I0612 20:36:29.893021   38420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:29.893346   38420 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:29.893554   38420 main.go:141] libmachine: (ha-844626-m04) Calling .DriverName
	I0612 20:36:29.893731   38420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:36:29.893753   38420 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHHostname
	I0612 20:36:29.896469   38420 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:29.896927   38420 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:36:29.896966   38420 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:29.897101   38420 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHPort
	I0612 20:36:29.897263   38420 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHKeyPath
	I0612 20:36:29.897421   38420 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHUsername
	I0612 20:36:29.897591   38420 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m04/id_rsa Username:docker}
	I0612 20:36:29.982542   38420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:36:29.997545   38420 status.go:257] ha-844626-m04 status: &{Name:ha-844626-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-844626 -n ha-844626
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-844626 logs -n 25: (1.47310717s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m03:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626:/home/docker/cp-test_ha-844626-m03_ha-844626.txt                     |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626 sudo cat                                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m03_ha-844626.txt                               |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m03:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m02:/home/docker/cp-test_ha-844626-m03_ha-844626-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626-m02 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m03_ha-844626-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m03:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04:/home/docker/cp-test_ha-844626-m03_ha-844626-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626-m04 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m03_ha-844626-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-844626 cp testdata/cp-test.txt                                              | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile43944605/001/cp-test_ha-844626-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626:/home/docker/cp-test_ha-844626-m04_ha-844626.txt                     |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626 sudo cat                                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m04_ha-844626.txt                               |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m02:/home/docker/cp-test_ha-844626-m04_ha-844626-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626-m02 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m04_ha-844626-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m03:/home/docker/cp-test_ha-844626-m04_ha-844626-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626-m03 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m04_ha-844626-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-844626 node stop m02 -v=7                                                   | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-844626 node start m02 -v=7                                                  | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:35 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 20:27:40
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 20:27:40.972412   32635 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:27:40.972656   32635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:27:40.972668   32635 out.go:304] Setting ErrFile to fd 2...
	I0612 20:27:40.972675   32635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:27:40.973281   32635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:27:40.974350   32635 out.go:298] Setting JSON to false
	I0612 20:27:40.975165   32635 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4206,"bootTime":1718219855,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 20:27:40.975250   32635 start.go:139] virtualization: kvm guest
	I0612 20:27:40.977294   32635 out.go:177] * [ha-844626] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 20:27:40.979019   32635 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 20:27:40.980460   32635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 20:27:40.979033   32635 notify.go:220] Checking for updates...
	I0612 20:27:40.982970   32635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:27:40.984198   32635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:27:40.985582   32635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 20:27:40.987005   32635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 20:27:40.988431   32635 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 20:27:41.022803   32635 out.go:177] * Using the kvm2 driver based on user configuration
	I0612 20:27:41.024103   32635 start.go:297] selected driver: kvm2
	I0612 20:27:41.024119   32635 start.go:901] validating driver "kvm2" against <nil>
	I0612 20:27:41.024129   32635 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 20:27:41.024807   32635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:27:41.024879   32635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 20:27:41.039138   32635 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0612 20:27:41.039192   32635 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0612 20:27:41.039394   32635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 20:27:41.039449   32635 cni.go:84] Creating CNI manager for ""
	I0612 20:27:41.039460   32635 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0612 20:27:41.039467   32635 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0612 20:27:41.039521   32635 start.go:340] cluster config:
	{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:27:41.039608   32635 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:27:41.041371   32635 out.go:177] * Starting "ha-844626" primary control-plane node in "ha-844626" cluster
	I0612 20:27:41.042634   32635 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:27:41.042666   32635 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0612 20:27:41.042675   32635 cache.go:56] Caching tarball of preloaded images
	I0612 20:27:41.042737   32635 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 20:27:41.042747   32635 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 20:27:41.043053   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:27:41.043073   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json: {Name:mked60f99278039b9c24d295779696b34306771a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:27:41.043256   32635 start.go:360] acquireMachinesLock for ha-844626: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 20:27:41.043302   32635 start.go:364] duration metric: took 22.479µs to acquireMachinesLock for "ha-844626"
	I0612 20:27:41.043320   32635 start.go:93] Provisioning new machine with config: &{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:27:41.043378   32635 start.go:125] createHost starting for "" (driver="kvm2")
	I0612 20:27:41.045005   32635 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 20:27:41.045132   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:27:41.045181   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:27:41.059056   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0612 20:27:41.059495   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:27:41.059994   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:27:41.060014   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:27:41.060344   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:27:41.060538   32635 main.go:141] libmachine: (ha-844626) Calling .GetMachineName
	I0612 20:27:41.060668   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:27:41.060852   32635 start.go:159] libmachine.API.Create for "ha-844626" (driver="kvm2")
	I0612 20:27:41.060882   32635 client.go:168] LocalClient.Create starting
	I0612 20:27:41.060923   32635 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem
	I0612 20:27:41.060965   32635 main.go:141] libmachine: Decoding PEM data...
	I0612 20:27:41.060988   32635 main.go:141] libmachine: Parsing certificate...
	I0612 20:27:41.061068   32635 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem
	I0612 20:27:41.061101   32635 main.go:141] libmachine: Decoding PEM data...
	I0612 20:27:41.061122   32635 main.go:141] libmachine: Parsing certificate...
	I0612 20:27:41.061147   32635 main.go:141] libmachine: Running pre-create checks...
	I0612 20:27:41.061161   32635 main.go:141] libmachine: (ha-844626) Calling .PreCreateCheck
	I0612 20:27:41.061488   32635 main.go:141] libmachine: (ha-844626) Calling .GetConfigRaw
	I0612 20:27:41.061817   32635 main.go:141] libmachine: Creating machine...
	I0612 20:27:41.061831   32635 main.go:141] libmachine: (ha-844626) Calling .Create
	I0612 20:27:41.061947   32635 main.go:141] libmachine: (ha-844626) Creating KVM machine...
	I0612 20:27:41.063282   32635 main.go:141] libmachine: (ha-844626) DBG | found existing default KVM network
	I0612 20:27:41.063927   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:41.063790   32658 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0612 20:27:41.063954   32635 main.go:141] libmachine: (ha-844626) DBG | created network xml: 
	I0612 20:27:41.063966   32635 main.go:141] libmachine: (ha-844626) DBG | <network>
	I0612 20:27:41.063974   32635 main.go:141] libmachine: (ha-844626) DBG |   <name>mk-ha-844626</name>
	I0612 20:27:41.063980   32635 main.go:141] libmachine: (ha-844626) DBG |   <dns enable='no'/>
	I0612 20:27:41.063984   32635 main.go:141] libmachine: (ha-844626) DBG |   
	I0612 20:27:41.063991   32635 main.go:141] libmachine: (ha-844626) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0612 20:27:41.063997   32635 main.go:141] libmachine: (ha-844626) DBG |     <dhcp>
	I0612 20:27:41.064003   32635 main.go:141] libmachine: (ha-844626) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0612 20:27:41.064013   32635 main.go:141] libmachine: (ha-844626) DBG |     </dhcp>
	I0612 20:27:41.064025   32635 main.go:141] libmachine: (ha-844626) DBG |   </ip>
	I0612 20:27:41.064038   32635 main.go:141] libmachine: (ha-844626) DBG |   
	I0612 20:27:41.064055   32635 main.go:141] libmachine: (ha-844626) DBG | </network>
	I0612 20:27:41.064063   32635 main.go:141] libmachine: (ha-844626) DBG | 
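The XML dumped above is the private libvirt network definition the kvm2 driver creates for the cluster. As a minimal sketch (not minikube's actual code; the type and field names below are illustrative only), a comparable definition can be rendered with Go's text/template:

package main

import (
	"os"
	"text/template"
)

// netTemplate mirrors the shape of the network definition logged above;
// the struct and template here are assumptions for illustration.
const netTemplate = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
    </dhcp>
  </ip>
</network>
`

type netParams struct {
	Name, Gateway, Netmask, DHCPStart, DHCPEnd string
}

func main() {
	p := netParams{
		Name:      "mk-ha-844626",
		Gateway:   "192.168.39.1",
		Netmask:   "255.255.255.0",
		DHCPStart: "192.168.39.2",
		DHCPEnd:   "192.168.39.253",
	}
	// Render the definition; a real driver would hand this XML to
	// libvirt's network-define API rather than printing it.
	tmpl := template.Must(template.New("net").Parse(netTemplate))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}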
	I0612 20:27:41.069240   32635 main.go:141] libmachine: (ha-844626) DBG | trying to create private KVM network mk-ha-844626 192.168.39.0/24...
	I0612 20:27:41.133290   32635 main.go:141] libmachine: (ha-844626) DBG | private KVM network mk-ha-844626 192.168.39.0/24 created
	I0612 20:27:41.133329   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:41.133261   32658 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:27:41.133343   32635 main.go:141] libmachine: (ha-844626) Setting up store path in /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626 ...
	I0612 20:27:41.133365   32635 main.go:141] libmachine: (ha-844626) Building disk image from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0612 20:27:41.133409   32635 main.go:141] libmachine: (ha-844626) Downloading /home/jenkins/minikube-integration/17779-14199/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0612 20:27:41.359777   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:41.359654   32658 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa...
	I0612 20:27:41.706884   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:41.706757   32658 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/ha-844626.rawdisk...
	I0612 20:27:41.706926   32635 main.go:141] libmachine: (ha-844626) DBG | Writing magic tar header
	I0612 20:27:41.706936   32635 main.go:141] libmachine: (ha-844626) DBG | Writing SSH key tar header
	I0612 20:27:41.706949   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:41.706868   32658 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626 ...
	I0612 20:27:41.707033   32635 main.go:141] libmachine: (ha-844626) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626 (perms=drwx------)
	I0612 20:27:41.707051   32635 main.go:141] libmachine: (ha-844626) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626
	I0612 20:27:41.707063   32635 main.go:141] libmachine: (ha-844626) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines (perms=drwxr-xr-x)
	I0612 20:27:41.707074   32635 main.go:141] libmachine: (ha-844626) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines
	I0612 20:27:41.707085   32635 main.go:141] libmachine: (ha-844626) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:27:41.707092   32635 main.go:141] libmachine: (ha-844626) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199
	I0612 20:27:41.707107   32635 main.go:141] libmachine: (ha-844626) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0612 20:27:41.707116   32635 main.go:141] libmachine: (ha-844626) DBG | Checking permissions on dir: /home/jenkins
	I0612 20:27:41.707125   32635 main.go:141] libmachine: (ha-844626) DBG | Checking permissions on dir: /home
	I0612 20:27:41.707146   32635 main.go:141] libmachine: (ha-844626) DBG | Skipping /home - not owner
	I0612 20:27:41.707197   32635 main.go:141] libmachine: (ha-844626) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube (perms=drwxr-xr-x)
	I0612 20:27:41.707234   32635 main.go:141] libmachine: (ha-844626) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199 (perms=drwxrwxr-x)
	I0612 20:27:41.707248   32635 main.go:141] libmachine: (ha-844626) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0612 20:27:41.707267   32635 main.go:141] libmachine: (ha-844626) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0612 20:27:41.707281   32635 main.go:141] libmachine: (ha-844626) Creating domain...
	I0612 20:27:41.708187   32635 main.go:141] libmachine: (ha-844626) define libvirt domain using xml: 
	I0612 20:27:41.708209   32635 main.go:141] libmachine: (ha-844626) <domain type='kvm'>
	I0612 20:27:41.708219   32635 main.go:141] libmachine: (ha-844626)   <name>ha-844626</name>
	I0612 20:27:41.708230   32635 main.go:141] libmachine: (ha-844626)   <memory unit='MiB'>2200</memory>
	I0612 20:27:41.708241   32635 main.go:141] libmachine: (ha-844626)   <vcpu>2</vcpu>
	I0612 20:27:41.708252   32635 main.go:141] libmachine: (ha-844626)   <features>
	I0612 20:27:41.708263   32635 main.go:141] libmachine: (ha-844626)     <acpi/>
	I0612 20:27:41.708273   32635 main.go:141] libmachine: (ha-844626)     <apic/>
	I0612 20:27:41.708284   32635 main.go:141] libmachine: (ha-844626)     <pae/>
	I0612 20:27:41.708308   32635 main.go:141] libmachine: (ha-844626)     
	I0612 20:27:41.708321   32635 main.go:141] libmachine: (ha-844626)   </features>
	I0612 20:27:41.708332   32635 main.go:141] libmachine: (ha-844626)   <cpu mode='host-passthrough'>
	I0612 20:27:41.708340   32635 main.go:141] libmachine: (ha-844626)   
	I0612 20:27:41.708352   32635 main.go:141] libmachine: (ha-844626)   </cpu>
	I0612 20:27:41.708363   32635 main.go:141] libmachine: (ha-844626)   <os>
	I0612 20:27:41.708373   32635 main.go:141] libmachine: (ha-844626)     <type>hvm</type>
	I0612 20:27:41.708384   32635 main.go:141] libmachine: (ha-844626)     <boot dev='cdrom'/>
	I0612 20:27:41.708396   32635 main.go:141] libmachine: (ha-844626)     <boot dev='hd'/>
	I0612 20:27:41.708412   32635 main.go:141] libmachine: (ha-844626)     <bootmenu enable='no'/>
	I0612 20:27:41.708423   32635 main.go:141] libmachine: (ha-844626)   </os>
	I0612 20:27:41.708434   32635 main.go:141] libmachine: (ha-844626)   <devices>
	I0612 20:27:41.708445   32635 main.go:141] libmachine: (ha-844626)     <disk type='file' device='cdrom'>
	I0612 20:27:41.708459   32635 main.go:141] libmachine: (ha-844626)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/boot2docker.iso'/>
	I0612 20:27:41.708475   32635 main.go:141] libmachine: (ha-844626)       <target dev='hdc' bus='scsi'/>
	I0612 20:27:41.708499   32635 main.go:141] libmachine: (ha-844626)       <readonly/>
	I0612 20:27:41.708510   32635 main.go:141] libmachine: (ha-844626)     </disk>
	I0612 20:27:41.708518   32635 main.go:141] libmachine: (ha-844626)     <disk type='file' device='disk'>
	I0612 20:27:41.708533   32635 main.go:141] libmachine: (ha-844626)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0612 20:27:41.708549   32635 main.go:141] libmachine: (ha-844626)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/ha-844626.rawdisk'/>
	I0612 20:27:41.708562   32635 main.go:141] libmachine: (ha-844626)       <target dev='hda' bus='virtio'/>
	I0612 20:27:41.708576   32635 main.go:141] libmachine: (ha-844626)     </disk>
	I0612 20:27:41.708589   32635 main.go:141] libmachine: (ha-844626)     <interface type='network'>
	I0612 20:27:41.708600   32635 main.go:141] libmachine: (ha-844626)       <source network='mk-ha-844626'/>
	I0612 20:27:41.708613   32635 main.go:141] libmachine: (ha-844626)       <model type='virtio'/>
	I0612 20:27:41.708623   32635 main.go:141] libmachine: (ha-844626)     </interface>
	I0612 20:27:41.708634   32635 main.go:141] libmachine: (ha-844626)     <interface type='network'>
	I0612 20:27:41.708647   32635 main.go:141] libmachine: (ha-844626)       <source network='default'/>
	I0612 20:27:41.708660   32635 main.go:141] libmachine: (ha-844626)       <model type='virtio'/>
	I0612 20:27:41.708671   32635 main.go:141] libmachine: (ha-844626)     </interface>
	I0612 20:27:41.708681   32635 main.go:141] libmachine: (ha-844626)     <serial type='pty'>
	I0612 20:27:41.708691   32635 main.go:141] libmachine: (ha-844626)       <target port='0'/>
	I0612 20:27:41.708703   32635 main.go:141] libmachine: (ha-844626)     </serial>
	I0612 20:27:41.708719   32635 main.go:141] libmachine: (ha-844626)     <console type='pty'>
	I0612 20:27:41.708730   32635 main.go:141] libmachine: (ha-844626)       <target type='serial' port='0'/>
	I0612 20:27:41.708743   32635 main.go:141] libmachine: (ha-844626)     </console>
	I0612 20:27:41.708755   32635 main.go:141] libmachine: (ha-844626)     <rng model='virtio'>
	I0612 20:27:41.708767   32635 main.go:141] libmachine: (ha-844626)       <backend model='random'>/dev/random</backend>
	I0612 20:27:41.708779   32635 main.go:141] libmachine: (ha-844626)     </rng>
	I0612 20:27:41.708794   32635 main.go:141] libmachine: (ha-844626)     
	I0612 20:27:41.708805   32635 main.go:141] libmachine: (ha-844626)     
	I0612 20:27:41.708814   32635 main.go:141] libmachine: (ha-844626)   </devices>
	I0612 20:27:41.708823   32635 main.go:141] libmachine: (ha-844626) </domain>
	I0612 20:27:41.708833   32635 main.go:141] libmachine: (ha-844626) 
	I0612 20:27:41.712846   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:5b:21:1b in network default
	I0612 20:27:41.713412   32635 main.go:141] libmachine: (ha-844626) Ensuring networks are active...
	I0612 20:27:41.713434   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:41.714106   32635 main.go:141] libmachine: (ha-844626) Ensuring network default is active
	I0612 20:27:41.714440   32635 main.go:141] libmachine: (ha-844626) Ensuring network mk-ha-844626 is active
	I0612 20:27:41.715208   32635 main.go:141] libmachine: (ha-844626) Getting domain xml...
	I0612 20:27:41.716030   32635 main.go:141] libmachine: (ha-844626) Creating domain...
	I0612 20:27:42.877106   32635 main.go:141] libmachine: (ha-844626) Waiting to get IP...
	I0612 20:27:42.877937   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:42.878329   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:42.878352   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:42.878306   32658 retry.go:31] will retry after 251.928711ms: waiting for machine to come up
	I0612 20:27:43.132009   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:43.132528   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:43.132550   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:43.132471   32658 retry.go:31] will retry after 324.411916ms: waiting for machine to come up
	I0612 20:27:43.458826   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:43.459192   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:43.459216   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:43.459164   32658 retry.go:31] will retry after 316.141039ms: waiting for machine to come up
	I0612 20:27:43.776450   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:43.776803   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:43.776829   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:43.776773   32658 retry.go:31] will retry after 586.686885ms: waiting for machine to come up
	I0612 20:27:44.365246   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:44.365624   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:44.365655   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:44.365582   32658 retry.go:31] will retry after 589.180902ms: waiting for machine to come up
	I0612 20:27:44.956283   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:44.956690   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:44.956724   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:44.956654   32658 retry.go:31] will retry after 585.086589ms: waiting for machine to come up
	I0612 20:27:45.543269   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:45.543749   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:45.543786   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:45.543679   32658 retry.go:31] will retry after 723.01632ms: waiting for machine to come up
	I0612 20:27:46.268214   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:46.268654   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:46.268679   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:46.268627   32658 retry.go:31] will retry after 1.107858591s: waiting for machine to come up
	I0612 20:27:47.377938   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:47.378439   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:47.378464   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:47.378403   32658 retry.go:31] will retry after 1.845151914s: waiting for machine to come up
	I0612 20:27:49.224676   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:49.225081   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:49.225103   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:49.225017   32658 retry.go:31] will retry after 2.326337363s: waiting for machine to come up
	I0612 20:27:51.553288   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:51.553759   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:51.553788   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:51.553714   32658 retry.go:31] will retry after 2.857778141s: waiting for machine to come up
	I0612 20:27:54.414736   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:54.415212   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:54.415240   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:54.415137   32658 retry.go:31] will retry after 3.378845367s: waiting for machine to come up
	I0612 20:27:57.796199   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:27:57.796596   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find current IP address of domain ha-844626 in network mk-ha-844626
	I0612 20:27:57.796614   32635 main.go:141] libmachine: (ha-844626) DBG | I0612 20:27:57.796552   32658 retry.go:31] will retry after 3.490939997s: waiting for machine to come up
	I0612 20:28:01.289120   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.289570   32635 main.go:141] libmachine: (ha-844626) Found IP for machine: 192.168.39.196
	I0612 20:28:01.289590   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has current primary IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.289597   32635 main.go:141] libmachine: (ha-844626) Reserving static IP address...
	I0612 20:28:01.289895   32635 main.go:141] libmachine: (ha-844626) DBG | unable to find host DHCP lease matching {name: "ha-844626", mac: "52:54:00:8a:2d:9f", ip: "192.168.39.196"} in network mk-ha-844626
	I0612 20:28:01.363725   32635 main.go:141] libmachine: (ha-844626) Reserved static IP address: 192.168.39.196
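The retry.go messages above poll the network's DHCP leases with growing delays until the new domain reports an address. A minimal sketch of that pattern follows; lookupIP is a hypothetical stand-in for the lease query, and the backoff constants are assumptions, not minikube's exact values:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases;
// here it simply fails a few times before "finding" an address.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.196", nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the delay and add jitter, roughly like the retry.go lines
		// in the log (250ms, ~320ms, ..., several seconds).
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}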
	I0612 20:28:01.363754   32635 main.go:141] libmachine: (ha-844626) Waiting for SSH to be available...
	I0612 20:28:01.363764   32635 main.go:141] libmachine: (ha-844626) DBG | Getting to WaitForSSH function...
	I0612 20:28:01.366151   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.366560   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:01.366586   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.366707   32635 main.go:141] libmachine: (ha-844626) DBG | Using SSH client type: external
	I0612 20:28:01.366738   32635 main.go:141] libmachine: (ha-844626) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa (-rw-------)
	I0612 20:28:01.366782   32635 main.go:141] libmachine: (ha-844626) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 20:28:01.366797   32635 main.go:141] libmachine: (ha-844626) DBG | About to run SSH command:
	I0612 20:28:01.366810   32635 main.go:141] libmachine: (ha-844626) DBG | exit 0
	I0612 20:28:01.487570   32635 main.go:141] libmachine: (ha-844626) DBG | SSH cmd err, output: <nil>: 
	I0612 20:28:01.488003   32635 main.go:141] libmachine: (ha-844626) KVM machine creation complete!
	I0612 20:28:01.488243   32635 main.go:141] libmachine: (ha-844626) Calling .GetConfigRaw
	I0612 20:28:01.488719   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:01.488938   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:01.489119   32635 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0612 20:28:01.489134   32635 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:28:01.490477   32635 main.go:141] libmachine: Detecting operating system of created instance...
	I0612 20:28:01.490491   32635 main.go:141] libmachine: Waiting for SSH to be available...
	I0612 20:28:01.490505   32635 main.go:141] libmachine: Getting to WaitForSSH function...
	I0612 20:28:01.490513   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:01.492740   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.493113   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:01.493143   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.493229   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:01.493420   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.493576   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.493724   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:01.493883   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:01.494178   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:28:01.494192   32635 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0612 20:28:01.594480   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 20:28:01.594515   32635 main.go:141] libmachine: Detecting the provisioner...
	I0612 20:28:01.594528   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:01.597525   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.597995   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:01.598018   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.598329   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:01.598531   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.598672   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.598810   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:01.598980   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:01.599236   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:28:01.599251   32635 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0612 20:28:01.699939   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0612 20:28:01.699997   32635 main.go:141] libmachine: found compatible host: buildroot
	I0612 20:28:01.700003   32635 main.go:141] libmachine: Provisioning with buildroot...
	I0612 20:28:01.700010   32635 main.go:141] libmachine: (ha-844626) Calling .GetMachineName
	I0612 20:28:01.700296   32635 buildroot.go:166] provisioning hostname "ha-844626"
	I0612 20:28:01.700319   32635 main.go:141] libmachine: (ha-844626) Calling .GetMachineName
	I0612 20:28:01.700529   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:01.703527   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.703955   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:01.703976   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.704077   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:01.704253   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.704415   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.704590   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:01.704785   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:01.704994   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:28:01.705008   32635 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844626 && echo "ha-844626" | sudo tee /etc/hostname
	I0612 20:28:01.822281   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844626
	
	I0612 20:28:01.822307   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:01.824810   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.825195   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:01.825228   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.825425   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:01.825594   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.825737   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:01.825833   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:01.825956   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:01.826125   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:28:01.826140   32635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844626' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844626/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844626' | sudo tee -a /etc/hosts; 
				fi
			fi
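The shell fragment above keeps /etc/hosts consistent with the new hostname: if no line already ends in "ha-844626", it rewrites an existing 127.0.1.1 entry or appends one. The same logic, expressed as a small Go sketch operating on the file contents (paths and behavior assumed, purely for illustration):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry reproduces the shell logic above: if no line already
// maps the hostname, rewrite an existing 127.0.1.1 entry or append one.
func ensureHostsEntry(contents, hostname string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(contents) {
		return contents // hostname already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(contents) {
		return loopback.ReplaceAllString(contents, "127.0.1.1 "+hostname)
	}
	if !strings.HasSuffix(contents, "\n") {
		contents += "\n"
	}
	return contents + "127.0.1.1 " + hostname + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Print the adjusted file; the provisioner writes it back via sudo tee.
	fmt.Print(ensureHostsEntry(string(data), "ha-844626"))
}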
	I0612 20:28:01.940075   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 20:28:01.940110   32635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 20:28:01.940139   32635 buildroot.go:174] setting up certificates
	I0612 20:28:01.940149   32635 provision.go:84] configureAuth start
	I0612 20:28:01.940158   32635 main.go:141] libmachine: (ha-844626) Calling .GetMachineName
	I0612 20:28:01.940481   32635 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:28:01.942968   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.943378   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:01.943405   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.943664   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:01.945708   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.946013   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:01.946031   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:01.946158   32635 provision.go:143] copyHostCerts
	I0612 20:28:01.946190   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:28:01.946237   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 20:28:01.946248   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:28:01.946320   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 20:28:01.946411   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:28:01.946445   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 20:28:01.946455   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:28:01.946493   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 20:28:01.946550   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:28:01.946573   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 20:28:01.946582   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:28:01.946614   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 20:28:01.946703   32635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.ha-844626 san=[127.0.0.1 192.168.39.196 ha-844626 localhost minikube]
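provision.go:117 above generates a server certificate whose SAN list covers the loopback address, the machine IP, the hostname, localhost and minikube. A minimal Go sketch of producing a certificate with that SAN set follows; it is self-signed for brevity, whereas the real provisioner signs with the minikube CA, and the expiry simply reuses the 26280h value from the config above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key size and SAN list modelled on the log line above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-844626"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-844626", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.196")},
	}
	// Self-signed here; minikube signs server.pem with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}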
	I0612 20:28:02.042742   32635 provision.go:177] copyRemoteCerts
	I0612 20:28:02.042800   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 20:28:02.042836   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:02.045415   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.045688   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.045731   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.045876   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:02.046057   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.046259   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:02.046382   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:28:02.126575   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0612 20:28:02.126659   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 20:28:02.152327   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0612 20:28:02.152398   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0612 20:28:02.176724   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0612 20:28:02.176783   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 20:28:02.200633   32635 provision.go:87] duration metric: took 260.470177ms to configureAuth
	I0612 20:28:02.200661   32635 buildroot.go:189] setting minikube options for container-runtime
	I0612 20:28:02.200875   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:28:02.200961   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:02.203680   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.204089   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.204118   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.204320   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:02.204515   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.204662   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.204801   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:02.205002   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:02.205171   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:28:02.205189   32635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 20:28:02.479832   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 20:28:02.479862   32635 main.go:141] libmachine: Checking connection to Docker...
	I0612 20:28:02.479885   32635 main.go:141] libmachine: (ha-844626) Calling .GetURL
	I0612 20:28:02.481131   32635 main.go:141] libmachine: (ha-844626) DBG | Using libvirt version 6000000
	I0612 20:28:02.483017   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.483369   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.483396   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.483571   32635 main.go:141] libmachine: Docker is up and running!
	I0612 20:28:02.483582   32635 main.go:141] libmachine: Reticulating splines...
	I0612 20:28:02.483588   32635 client.go:171] duration metric: took 21.422699477s to LocalClient.Create
	I0612 20:28:02.483608   32635 start.go:167] duration metric: took 21.422756924s to libmachine.API.Create "ha-844626"
	I0612 20:28:02.483616   32635 start.go:293] postStartSetup for "ha-844626" (driver="kvm2")
	I0612 20:28:02.483625   32635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 20:28:02.483639   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:02.483845   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 20:28:02.483877   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:02.486014   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.486321   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.486347   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.486478   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:02.486668   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.486800   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:02.486911   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:28:02.566307   32635 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 20:28:02.570568   32635 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 20:28:02.570595   32635 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 20:28:02.570652   32635 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 20:28:02.570753   32635 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 20:28:02.570768   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /etc/ssl/certs/214442.pem
	I0612 20:28:02.570903   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 20:28:02.580443   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:28:02.604400   32635 start.go:296] duration metric: took 120.770015ms for postStartSetup
	I0612 20:28:02.604443   32635 main.go:141] libmachine: (ha-844626) Calling .GetConfigRaw
	I0612 20:28:02.605074   32635 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:28:02.607655   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.607980   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.607998   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.608305   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:28:02.608477   32635 start.go:128] duration metric: took 21.565089753s to createHost
	I0612 20:28:02.608498   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:02.610703   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.611051   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.611069   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.611320   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:02.611512   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.611685   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.611821   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:02.611963   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:02.612195   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:28:02.612209   32635 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 20:28:02.716099   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718224082.689142441
	
	I0612 20:28:02.716119   32635 fix.go:216] guest clock: 1718224082.689142441
	I0612 20:28:02.716126   32635 fix.go:229] Guest: 2024-06-12 20:28:02.689142441 +0000 UTC Remote: 2024-06-12 20:28:02.608489141 +0000 UTC m=+21.668937559 (delta=80.6533ms)
	I0612 20:28:02.716144   32635 fix.go:200] guest clock delta is within tolerance: 80.6533ms
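fix.go above reads the guest's clock over SSH, compares it with the host's, and accepts the machine when the skew is small (here ~80ms). A minimal sketch of that comparison, with the 2s tolerance being an assumption for illustration rather than minikube's actual threshold:

package main

import (
	"fmt"
	"time"
)

// clockSkewOK reports whether the guest clock is close enough to the host
// clock; the 2s tolerance is assumed for this sketch.
func clockSkewOK(guest, host time.Time) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta < 2*time.Second
}

func main() {
	host := time.Now()
	guest := host.Add(80 * time.Millisecond) // e.g. the ~80ms delta in the log
	delta, ok := clockSkewOK(guest, host)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}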
	I0612 20:28:02.716149   32635 start.go:83] releasing machines lock for "ha-844626", held for 21.672839067s
	I0612 20:28:02.716166   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:02.716425   32635 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:28:02.719033   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.719441   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.719476   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.719585   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:02.720108   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:02.720308   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:02.720404   32635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 20:28:02.720458   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:02.720483   32635 ssh_runner.go:195] Run: cat /version.json
	I0612 20:28:02.720502   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:02.723003   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.723040   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.723416   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.723444   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.723475   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:02.723489   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:02.723568   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:02.723716   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:02.723727   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.723848   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:02.723920   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:02.723986   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:28:02.724041   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:02.724173   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:28:02.832342   32635 ssh_runner.go:195] Run: systemctl --version
	I0612 20:28:02.839025   32635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 20:28:03.005734   32635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 20:28:03.012212   32635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 20:28:03.012286   32635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 20:28:03.029159   32635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 20:28:03.029183   32635 start.go:494] detecting cgroup driver to use...
	I0612 20:28:03.029233   32635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 20:28:03.045339   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 20:28:03.059281   32635 docker.go:217] disabling cri-docker service (if available) ...
	I0612 20:28:03.059353   32635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 20:28:03.073629   32635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 20:28:03.087326   32635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 20:28:03.207418   32635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 20:28:03.358651   32635 docker.go:233] disabling docker service ...
	I0612 20:28:03.358723   32635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 20:28:03.373844   32635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 20:28:03.387977   32635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 20:28:03.525343   32635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 20:28:03.650448   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
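At this point containerd, cri-docker and docker have all been stopped and masked, leaving CRI-O as the only runtime that will serve the CRI socket. A minimal sketch of how to double-check that by hand, assuming shell access to the node (crio is restarted a few lines further down):

    # Competing runtimes should report "masked"; crio should be active after the restart below
    systemctl is-enabled docker.socket cri-docker.socket 2>/dev/null
    sudo systemctl is-active crio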
	I0612 20:28:03.665958   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 20:28:03.685409   32635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 20:28:03.685472   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:03.696471   32635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 20:28:03.696527   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:03.707401   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:03.717547   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:03.728800   32635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 20:28:03.740092   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:03.751133   32635 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:03.768505   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
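The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl. A minimal sketch of how to confirm the result by hand; the path and keys are taken from the commands above, the expected values are an assumption based on what those seds write:

    # Show the CRI-O drop-in after minikube's edits
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected (assumption):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]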
	I0612 20:28:03.779732   32635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 20:28:03.790037   32635 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 20:28:03.790104   32635 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 20:28:03.804622   32635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
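The sysctl probe fails because br_netfilter is not loaded yet, which is why the next step is a modprobe followed by enabling IPv4 forwarding. A short sketch of the same checks done manually on the node:

    # Load the bridge netfilter module and re-check the sysctl (mirrors the steps in the log)
    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables   # should now resolve instead of failing
    cat /proc/sys/net/ipv4/ip_forward                # the log writes 1 here so pod traffic can be routed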
	I0612 20:28:03.815507   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:28:03.932284   32635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 20:28:04.074714   32635 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 20:28:04.074788   32635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 20:28:04.079956   32635 start.go:562] Will wait 60s for crictl version
	I0612 20:28:04.080013   32635 ssh_runner.go:195] Run: which crictl
	I0612 20:28:04.083863   32635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 20:28:04.123823   32635 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 20:28:04.123927   32635 ssh_runner.go:195] Run: crio --version
	I0612 20:28:04.152702   32635 ssh_runner.go:195] Run: crio --version
	I0612 20:28:04.183406   32635 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 20:28:04.184860   32635 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:28:04.187810   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:04.188255   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:04.188290   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:04.188431   32635 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 20:28:04.192780   32635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
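The one-liner above updates /etc/hosts idempotently: drop any stale host.minikube.internal line, append the current mapping, and copy the temp file back with sudo so ownership and permissions are preserved. The same pattern, spelled out as a reusable sketch (update_hosts_entry is a hypothetical helper name, not part of minikube):

    # Hypothetical helper wrapping the idempotent /etc/hosts update used in the log
    update_hosts_entry() {
      local ip="$1" name="$2"
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    update_hosts_entry 192.168.39.1 host.minikube.internal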
	I0612 20:28:04.205768   32635 kubeadm.go:877] updating cluster {Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 20:28:04.205874   32635 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:28:04.205915   32635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 20:28:04.239424   32635 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 20:28:04.239487   32635 ssh_runner.go:195] Run: which lz4
	I0612 20:28:04.243748   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0612 20:28:04.243871   32635 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 20:28:04.248527   32635 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 20:28:04.248562   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 20:28:05.683994   32635 crio.go:462] duration metric: took 1.440168489s to copy over tarball
	I0612 20:28:05.684069   32635 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 20:28:07.793927   32635 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.109826987s)
	I0612 20:28:07.793959   32635 crio.go:469] duration metric: took 2.109938484s to extract the tarball
	I0612 20:28:07.793966   32635 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 20:28:07.833160   32635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 20:28:07.876721   32635 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 20:28:07.876749   32635 cache_images.go:84] Images are preloaded, skipping loading
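After the tarball is extracted into /var the second crictl listing finds all control-plane images, so the image load step is skipped. A quick hedged way to verify the preload landed in CRI-O's store, assuming crictl is on the PATH as in the log:

    # The preloaded v1.30.1 control-plane images should now be listed by CRI-O
    sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|etcd'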
	I0612 20:28:07.876758   32635 kubeadm.go:928] updating node { 192.168.39.196 8443 v1.30.1 crio true true} ...
	I0612 20:28:07.876885   32635 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844626 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 20:28:07.876969   32635 ssh_runner.go:195] Run: crio config
	I0612 20:28:07.926529   32635 cni.go:84] Creating CNI manager for ""
	I0612 20:28:07.926553   32635 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0612 20:28:07.926562   32635 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 20:28:07.926587   32635 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-844626 NodeName:ha-844626 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 20:28:07.926722   32635 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-844626"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
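The config above is later written to /var/tmp/minikube/kubeadm.yaml (see the scp below). A minimal sketch of how it could be exercised without mutating the node, assuming the file is already in place; the kubeadm path matches the binaries directory used later in the log:

    # Dry-run the generated config; nothing is written to the cluster
    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run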
	
	I0612 20:28:07.926746   32635 kube-vip.go:115] generating kube-vip config ...
	I0612 20:28:07.926784   32635 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0612 20:28:07.943966   32635 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0612 20:28:07.944088   32635 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
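This manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so kubelet runs kube-vip as a static pod that advertises the HA VIP 192.168.39.254 on eth0. A hedged sketch of how to confirm the VIP once the control plane is up (interface name and address come from the manifest above):

    # The VIP should be bound to eth0 by kube-vip, and the static pod manifest should exist
    ip addr show eth0 | grep 192.168.39.254
    sudo head -n 5 /etc/kubernetes/manifests/kube-vip.yaml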
	I0612 20:28:07.944165   32635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 20:28:07.954861   32635 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 20:28:07.954939   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0612 20:28:07.964651   32635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0612 20:28:07.981438   32635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 20:28:07.997818   32635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0612 20:28:08.014061   32635 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0612 20:28:08.030286   32635 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0612 20:28:08.034165   32635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 20:28:08.046294   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:28:08.166144   32635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 20:28:08.184592   32635 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626 for IP: 192.168.39.196
	I0612 20:28:08.184616   32635 certs.go:194] generating shared ca certs ...
	I0612 20:28:08.184636   32635 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:08.184825   32635 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 20:28:08.184876   32635 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 20:28:08.184890   32635 certs.go:256] generating profile certs ...
	I0612 20:28:08.184953   32635 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key
	I0612 20:28:08.184971   32635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.crt with IP's: []
	I0612 20:28:08.252302   32635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.crt ...
	I0612 20:28:08.252333   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.crt: {Name:mkd4f9765dc2fdba49dd784d22bb60440d0a8c32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:08.252486   32635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key ...
	I0612 20:28:08.252497   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key: {Name:mk886b18d2e24f1c9aa1cd0d466e4744a6eefbc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:08.252569   32635 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.23a7e20a
	I0612 20:28:08.252583   32635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.23a7e20a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.254]
	I0612 20:28:08.355250   32635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.23a7e20a ...
	I0612 20:28:08.355273   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.23a7e20a: {Name:mkfdab9b803a4796bf933c99aedbe3d7f2c9d42d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:08.355419   32635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.23a7e20a ...
	I0612 20:28:08.355432   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.23a7e20a: {Name:mk231338a689f18482141f43a8c21a67e5049b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:08.355500   32635 certs.go:381] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.23a7e20a -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt
	I0612 20:28:08.355581   32635 certs.go:385] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.23a7e20a -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key
	I0612 20:28:08.355633   32635 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key
	I0612 20:28:08.355648   32635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt with IP's: []
	I0612 20:28:08.441754   32635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt ...
	I0612 20:28:08.441779   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt: {Name:mka449a40f128c0d8f283fbeb7606c82b8efeb35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:08.441911   32635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key ...
	I0612 20:28:08.441920   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key: {Name:mk89014c94a5f0f3d7cb3f60cd2c9fd7d27fbf9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
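The apiserver certificate generated above embeds the service IP, loopback, node IP and the HA VIP as SANs. A quick hedged check of those SANs with openssl, using the profile path from the log (the same check would work against /var/lib/minikube/certs/apiserver.crt on the node after the scp below):

    # Inspect the SANs baked into the freshly generated apiserver cert
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # Expect 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.196 and the VIP 192.168.39.254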
	I0612 20:28:08.441983   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 20:28:08.441999   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0612 20:28:08.442009   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 20:28:08.442019   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 20:28:08.442031   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 20:28:08.442041   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 20:28:08.442051   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 20:28:08.442060   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 20:28:08.442103   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 20:28:08.442135   32635 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 20:28:08.442145   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 20:28:08.442165   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 20:28:08.442185   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 20:28:08.442206   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 20:28:08.442240   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:28:08.442276   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:28:08.442303   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem -> /usr/share/ca-certificates/21444.pem
	I0612 20:28:08.442316   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /usr/share/ca-certificates/214442.pem
	I0612 20:28:08.442785   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 20:28:08.468953   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 20:28:08.492177   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 20:28:08.515233   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 20:28:08.538548   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 20:28:08.561750   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 20:28:08.585113   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 20:28:08.609569   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 20:28:08.634385   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 20:28:08.658368   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 20:28:08.681497   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 20:28:08.712593   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 20:28:08.729762   32635 ssh_runner.go:195] Run: openssl version
	I0612 20:28:08.735819   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 20:28:08.746382   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:28:08.751012   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:28:08.751061   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:28:08.756796   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 20:28:08.767201   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 20:28:08.778150   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 20:28:08.782749   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 20:28:08.782796   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 20:28:08.788403   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 20:28:08.798461   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 20:28:08.809290   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 20:28:08.814009   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 20:28:08.814078   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 20:28:08.819834   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
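The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash filenames produced by the preceding openssl x509 -hash calls; that is how the system trust store looks up CA files. A small sketch of the same pattern for one of the certs from the log:

    # Compute the subject hash OpenSSL uses for CA lookup, then create the matching symlink
    cert=/usr/share/ca-certificates/minikubeCA.pem        # path taken from the log above
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"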
	I0612 20:28:08.831221   32635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 20:28:08.835779   32635 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 20:28:08.835840   32635 kubeadm.go:391] StartCluster: {Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:28:08.835929   32635 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 20:28:08.835978   32635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 20:28:08.874542   32635 cri.go:89] found id: ""
	I0612 20:28:08.874608   32635 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0612 20:28:08.884683   32635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 20:28:08.894058   32635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 20:28:08.903306   32635 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 20:28:08.903321   32635 kubeadm.go:156] found existing configuration files:
	
	I0612 20:28:08.903354   32635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 20:28:08.912378   32635 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 20:28:08.912422   32635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 20:28:08.921517   32635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 20:28:08.930469   32635 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 20:28:08.930522   32635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 20:28:08.939976   32635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 20:28:08.948907   32635 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 20:28:08.948953   32635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 20:28:08.961019   32635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 20:28:08.970075   32635 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 20:28:08.970133   32635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
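The grep/rm sequence above keeps each kubeconfig under /etc/kubernetes only if it already points at the HA endpoint, and removes it otherwise so kubeadm can regenerate it; on this first start none of the files exist, so every check exits with status 2. The same cleanup, sketched as a loop:

    # Remove any kubeconfig that does not reference the HA control-plane endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done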
	I0612 20:28:08.980621   32635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 20:28:09.229616   32635 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 20:28:20.211010   32635 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 20:28:20.211085   32635 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 20:28:20.211184   32635 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 20:28:20.211342   32635 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 20:28:20.211478   32635 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0612 20:28:20.211560   32635 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 20:28:20.213439   32635 out.go:204]   - Generating certificates and keys ...
	I0612 20:28:20.213516   32635 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 20:28:20.213584   32635 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 20:28:20.213668   32635 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0612 20:28:20.213742   32635 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0612 20:28:20.213793   32635 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0612 20:28:20.213836   32635 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0612 20:28:20.213915   32635 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0612 20:28:20.214081   32635 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-844626 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	I0612 20:28:20.214154   32635 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0612 20:28:20.214311   32635 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-844626 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	I0612 20:28:20.214374   32635 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0612 20:28:20.214428   32635 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0612 20:28:20.214466   32635 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0612 20:28:20.214522   32635 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 20:28:20.214565   32635 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 20:28:20.214638   32635 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 20:28:20.214733   32635 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 20:28:20.214838   32635 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 20:28:20.214958   32635 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 20:28:20.215107   32635 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 20:28:20.215223   32635 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 20:28:20.216927   32635 out.go:204]   - Booting up control plane ...
	I0612 20:28:20.217047   32635 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 20:28:20.217163   32635 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 20:28:20.217226   32635 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 20:28:20.217322   32635 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 20:28:20.217403   32635 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 20:28:20.217462   32635 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 20:28:20.217645   32635 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 20:28:20.217739   32635 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 20:28:20.217818   32635 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.030994ms
	I0612 20:28:20.217923   32635 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 20:28:20.217978   32635 kubeadm.go:309] [api-check] The API server is healthy after 6.055837616s
	I0612 20:28:20.218073   32635 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 20:28:20.218200   32635 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 20:28:20.218272   32635 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 20:28:20.218434   32635 kubeadm.go:309] [mark-control-plane] Marking the node ha-844626 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 20:28:20.218519   32635 kubeadm.go:309] [bootstrap-token] Using token: rq2m6h.oorxndmx2szfgjlt
	I0612 20:28:20.219971   32635 out.go:204]   - Configuring RBAC rules ...
	I0612 20:28:20.220078   32635 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 20:28:20.220163   32635 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 20:28:20.220336   32635 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 20:28:20.220457   32635 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 20:28:20.220559   32635 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 20:28:20.220635   32635 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 20:28:20.220728   32635 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 20:28:20.220771   32635 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 20:28:20.220810   32635 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 20:28:20.220816   32635 kubeadm.go:309] 
	I0612 20:28:20.220904   32635 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 20:28:20.220928   32635 kubeadm.go:309] 
	I0612 20:28:20.221027   32635 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 20:28:20.221043   32635 kubeadm.go:309] 
	I0612 20:28:20.221091   32635 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 20:28:20.221168   32635 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 20:28:20.221244   32635 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 20:28:20.221257   32635 kubeadm.go:309] 
	I0612 20:28:20.221345   32635 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 20:28:20.221352   32635 kubeadm.go:309] 
	I0612 20:28:20.221408   32635 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 20:28:20.221417   32635 kubeadm.go:309] 
	I0612 20:28:20.221481   32635 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 20:28:20.221585   32635 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 20:28:20.221682   32635 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 20:28:20.221692   32635 kubeadm.go:309] 
	I0612 20:28:20.221804   32635 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 20:28:20.221888   32635 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 20:28:20.221908   32635 kubeadm.go:309] 
	I0612 20:28:20.221980   32635 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token rq2m6h.oorxndmx2szfgjlt \
	I0612 20:28:20.222077   32635 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 20:28:20.222099   32635 kubeadm.go:309] 	--control-plane 
	I0612 20:28:20.222103   32635 kubeadm.go:309] 
	I0612 20:28:20.222174   32635 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 20:28:20.222181   32635 kubeadm.go:309] 
	I0612 20:28:20.222257   32635 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token rq2m6h.oorxndmx2szfgjlt \
	I0612 20:28:20.222358   32635 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 20:28:20.222370   32635 cni.go:84] Creating CNI manager for ""
	I0612 20:28:20.222376   32635 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0612 20:28:20.224675   32635 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0612 20:28:20.226046   32635 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0612 20:28:20.231610   32635 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0612 20:28:20.231627   32635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0612 20:28:20.252220   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
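minikube applies its CNI manifest (kindnet for this multi-node profile) with the cluster's own kubectl binary. A hedged way to confirm the CNI rolled out, assuming the manifest names the DaemonSet "kindnet" in kube-system, which is minikube's default but is not shown in this log:

    # Wait for the CNI daemonset to become ready (name/namespace are assumptions)
    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset/kindnet --timeout=120s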
	I0612 20:28:20.618054   32635 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 20:28:20.618128   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:20.618143   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844626 minikube.k8s.io/updated_at=2024_06_12T20_28_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=ha-844626 minikube.k8s.io/primary=true
	I0612 20:28:20.629957   32635 ops.go:34] apiserver oom_adj: -16
	I0612 20:28:20.720524   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:21.221483   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:21.720581   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:22.220693   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:22.721548   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:23.220607   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:23.720812   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:24.221346   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:24.720812   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:25.220680   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:25.721545   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:26.221495   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:26.721299   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:27.221251   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:27.721536   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:28.221381   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:28.720581   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:29.220635   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:29.721034   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:30.221414   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:30.721542   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:31.221065   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:31.720883   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:32.221359   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:32.720864   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:33.220798   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 20:28:33.330554   32635 kubeadm.go:1107] duration metric: took 12.712482777s to wait for elevateKubeSystemPrivileges
	W0612 20:28:33.330595   32635 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 20:28:33.330603   32635 kubeadm.go:393] duration metric: took 24.494765813s to StartCluster
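The repeated "kubectl get sa default" calls above poll until the default ServiceAccount exists, which is how minikube decides the cluster is ready for the cluster-admin binding created earlier; here the loop took about 12.7s. A hedged sketch of an equivalent wait expressed directly with kubectl wait (the jsonpath form needs kubectl >= 1.23):

    # Equivalent wait: block until the default ServiceAccount has been created
    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n default wait --for=jsonpath='{.metadata.name}'=default serviceaccount/default --timeout=60s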
	I0612 20:28:33.330619   32635 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:33.330684   32635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:28:33.331674   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:28:33.331871   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0612 20:28:33.331883   32635 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:28:33.331908   32635 start.go:240] waiting for startup goroutines ...
	I0612 20:28:33.331924   32635 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 20:28:33.331983   32635 addons.go:69] Setting storage-provisioner=true in profile "ha-844626"
	I0612 20:28:33.332006   32635 addons.go:69] Setting default-storageclass=true in profile "ha-844626"
	I0612 20:28:33.332014   32635 addons.go:234] Setting addon storage-provisioner=true in "ha-844626"
	I0612 20:28:33.332031   32635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-844626"
	I0612 20:28:33.332042   32635 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:28:33.332086   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:28:33.332492   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:28:33.332492   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:28:33.332521   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:28:33.332546   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:28:33.347640   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46079
	I0612 20:28:33.347640   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45557
	I0612 20:28:33.348060   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:28:33.348074   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:28:33.348521   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:28:33.348537   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:28:33.348640   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:28:33.348662   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:28:33.348870   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:28:33.348931   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:28:33.349042   32635 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:28:33.349473   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:28:33.349499   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:28:33.351192   32635 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:28:33.351535   32635 kapi.go:59] client config for ha-844626: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.crt", KeyFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key", CAFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfb000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 20:28:33.352103   32635 cert_rotation.go:137] Starting client certificate rotation controller
	I0612 20:28:33.352283   32635 addons.go:234] Setting addon default-storageclass=true in "ha-844626"
	I0612 20:28:33.352326   32635 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:28:33.352678   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:28:33.352725   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:28:33.364666   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0612 20:28:33.365160   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:28:33.365649   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:28:33.365676   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:28:33.367379   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0612 20:28:33.367381   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:28:33.367675   32635 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:28:33.367862   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:28:33.368302   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:28:33.368317   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:28:33.368651   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:28:33.369084   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:28:33.369115   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:28:33.369387   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:33.371474   32635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 20:28:33.372996   32635 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 20:28:33.373014   32635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 20:28:33.373029   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:33.376276   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:33.376742   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:33.376767   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:33.376903   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:33.377075   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:33.377211   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:33.377338   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:28:33.383810   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I0612 20:28:33.384112   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:28:33.384546   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:28:33.384571   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:28:33.384847   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:28:33.385008   32635 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:28:33.386361   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:28:33.386579   32635 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 20:28:33.386593   32635 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 20:28:33.386605   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:28:33.389543   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:33.389965   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:28:33.390000   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:28:33.390136   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:28:33.390252   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:28:33.390353   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:28:33.390447   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:28:33.480944   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0612 20:28:33.589149   32635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 20:28:33.591711   32635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 20:28:33.952850   32635 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
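Illustrative aside (not part of the captured log): the sed pipeline run at 20:28:33.480944 above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway 192.168.39.1. A minimal Go sketch of the same edit follows; injectHostRecord is a hypothetical helper name and the sample Corefile is abbreviated.
	// Sketch only: insert a hosts{} block ahead of the forward plugin, which is
	// what the logged sed command does to the coredns ConfigMap data.
	package main

	import (
		"fmt"
		"strings"
	)

	func injectHostRecord(corefile, hostIP string) string {
		block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		return strings.Replace(corefile, "        forward .", block+"        forward .", 1)
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
	}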
	I0612 20:28:33.952917   32635 main.go:141] libmachine: Making call to close driver server
	I0612 20:28:33.952937   32635 main.go:141] libmachine: (ha-844626) Calling .Close
	I0612 20:28:33.953228   32635 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:28:33.953244   32635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:28:33.953248   32635 main.go:141] libmachine: (ha-844626) DBG | Closing plugin on server side
	I0612 20:28:33.953255   32635 main.go:141] libmachine: Making call to close driver server
	I0612 20:28:33.953266   32635 main.go:141] libmachine: (ha-844626) Calling .Close
	I0612 20:28:33.953539   32635 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:28:33.953543   32635 main.go:141] libmachine: (ha-844626) DBG | Closing plugin on server side
	I0612 20:28:33.953554   32635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:28:33.953679   32635 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0612 20:28:33.953691   32635 round_trippers.go:469] Request Headers:
	I0612 20:28:33.953702   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:28:33.953710   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:28:33.965113   32635 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0612 20:28:33.965633   32635 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0612 20:28:33.965647   32635 round_trippers.go:469] Request Headers:
	I0612 20:28:33.965654   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:28:33.965659   32635 round_trippers.go:473]     Content-Type: application/json
	I0612 20:28:33.965664   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:28:33.969161   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:28:33.969312   32635 main.go:141] libmachine: Making call to close driver server
	I0612 20:28:33.969329   32635 main.go:141] libmachine: (ha-844626) Calling .Close
	I0612 20:28:33.969586   32635 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:28:33.969598   32635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:28:34.294070   32635 main.go:141] libmachine: Making call to close driver server
	I0612 20:28:34.294209   32635 main.go:141] libmachine: (ha-844626) Calling .Close
	I0612 20:28:34.294502   32635 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:28:34.294528   32635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:28:34.294537   32635 main.go:141] libmachine: Making call to close driver server
	I0612 20:28:34.294546   32635 main.go:141] libmachine: (ha-844626) Calling .Close
	I0612 20:28:34.294550   32635 main.go:141] libmachine: (ha-844626) DBG | Closing plugin on server side
	I0612 20:28:34.294766   32635 main.go:141] libmachine: Successfully made call to close driver server
	I0612 20:28:34.294781   32635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 20:28:34.294806   32635 main.go:141] libmachine: (ha-844626) DBG | Closing plugin on server side
	I0612 20:28:34.296660   32635 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0612 20:28:34.298128   32635 addons.go:510] duration metric: took 966.190513ms for enable addons: enabled=[default-storageclass storage-provisioner]
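Illustrative aside (not part of the captured log): the two addons enabled above are applied by copying their manifests to /etc/kubernetes/addons on the node and running kubectl apply against the in-VM kubeconfig, as the Run: lines at 20:28:33.589149 and 20:28:33.591711 show. A minimal, self-contained Go sketch of that apply step is below; applyAddon is a hypothetical name, and it runs kubectl locally rather than over SSH.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyAddon shells out to kubectl, mirroring the logged
	// "kubectl apply -f /etc/kubernetes/addons/..." invocations.
	func applyAddon(kubeconfig, manifest string) error {
		out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig, "apply", "-f", manifest).CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
		}
		return nil
	}

	func main() {
		for _, m := range []string{
			"/etc/kubernetes/addons/storageclass.yaml",
			"/etc/kubernetes/addons/storage-provisioner.yaml",
		} {
			if err := applyAddon("/var/lib/minikube/kubeconfig", m); err != nil {
				fmt.Println(err)
			}
		}
	}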
	I0612 20:28:34.298168   32635 start.go:245] waiting for cluster config update ...
	I0612 20:28:34.298185   32635 start.go:254] writing updated cluster config ...
	I0612 20:28:34.300255   32635 out.go:177] 
	I0612 20:28:34.301433   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:28:34.301528   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:28:34.303062   32635 out.go:177] * Starting "ha-844626-m02" control-plane node in "ha-844626" cluster
	I0612 20:28:34.304493   32635 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:28:34.304525   32635 cache.go:56] Caching tarball of preloaded images
	I0612 20:28:34.304617   32635 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 20:28:34.304633   32635 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 20:28:34.304728   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:28:34.304946   32635 start.go:360] acquireMachinesLock for ha-844626-m02: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 20:28:34.305008   32635 start.go:364] duration metric: took 37.579µs to acquireMachinesLock for "ha-844626-m02"
	I0612 20:28:34.305033   32635 start.go:93] Provisioning new machine with config: &{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:28:34.305134   32635 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0612 20:28:34.306787   32635 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 20:28:34.306883   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:28:34.306916   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:28:34.321078   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36187
	I0612 20:28:34.321498   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:28:34.321960   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:28:34.321979   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:28:34.322280   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:28:34.322483   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetMachineName
	I0612 20:28:34.322632   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:34.322781   32635 start.go:159] libmachine.API.Create for "ha-844626" (driver="kvm2")
	I0612 20:28:34.322805   32635 client.go:168] LocalClient.Create starting
	I0612 20:28:34.322838   32635 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem
	I0612 20:28:34.322876   32635 main.go:141] libmachine: Decoding PEM data...
	I0612 20:28:34.322896   32635 main.go:141] libmachine: Parsing certificate...
	I0612 20:28:34.322960   32635 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem
	I0612 20:28:34.322988   32635 main.go:141] libmachine: Decoding PEM data...
	I0612 20:28:34.323010   32635 main.go:141] libmachine: Parsing certificate...
	I0612 20:28:34.323038   32635 main.go:141] libmachine: Running pre-create checks...
	I0612 20:28:34.323051   32635 main.go:141] libmachine: (ha-844626-m02) Calling .PreCreateCheck
	I0612 20:28:34.323216   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetConfigRaw
	I0612 20:28:34.323562   32635 main.go:141] libmachine: Creating machine...
	I0612 20:28:34.323577   32635 main.go:141] libmachine: (ha-844626-m02) Calling .Create
	I0612 20:28:34.323694   32635 main.go:141] libmachine: (ha-844626-m02) Creating KVM machine...
	I0612 20:28:34.324707   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found existing default KVM network
	I0612 20:28:34.324850   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found existing private KVM network mk-ha-844626
	I0612 20:28:34.324957   32635 main.go:141] libmachine: (ha-844626-m02) Setting up store path in /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02 ...
	I0612 20:28:34.324977   32635 main.go:141] libmachine: (ha-844626-m02) Building disk image from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0612 20:28:34.325043   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:34.324957   33039 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:28:34.325237   32635 main.go:141] libmachine: (ha-844626-m02) Downloading /home/jenkins/minikube-integration/17779-14199/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0612 20:28:34.563768   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:34.563644   33039 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa...
	I0612 20:28:34.685880   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:34.685763   33039 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/ha-844626-m02.rawdisk...
	I0612 20:28:34.685922   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Writing magic tar header
	I0612 20:28:34.685932   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Writing SSH key tar header
	I0612 20:28:34.685944   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:34.685889   33039 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02 ...
	I0612 20:28:34.686027   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02
	I0612 20:28:34.686056   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines
	I0612 20:28:34.686069   32635 main.go:141] libmachine: (ha-844626-m02) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02 (perms=drwx------)
	I0612 20:28:34.686081   32635 main.go:141] libmachine: (ha-844626-m02) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines (perms=drwxr-xr-x)
	I0612 20:28:34.686092   32635 main.go:141] libmachine: (ha-844626-m02) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube (perms=drwxr-xr-x)
	I0612 20:28:34.686104   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:28:34.686120   32635 main.go:141] libmachine: (ha-844626-m02) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199 (perms=drwxrwxr-x)
	I0612 20:28:34.686136   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199
	I0612 20:28:34.686147   32635 main.go:141] libmachine: (ha-844626-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0612 20:28:34.686159   32635 main.go:141] libmachine: (ha-844626-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0612 20:28:34.686164   32635 main.go:141] libmachine: (ha-844626-m02) Creating domain...
	I0612 20:28:34.686171   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0612 20:28:34.686178   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Checking permissions on dir: /home/jenkins
	I0612 20:28:34.686183   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Checking permissions on dir: /home
	I0612 20:28:34.686193   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Skipping /home - not owner
	I0612 20:28:34.687131   32635 main.go:141] libmachine: (ha-844626-m02) define libvirt domain using xml: 
	I0612 20:28:34.687156   32635 main.go:141] libmachine: (ha-844626-m02) <domain type='kvm'>
	I0612 20:28:34.687163   32635 main.go:141] libmachine: (ha-844626-m02)   <name>ha-844626-m02</name>
	I0612 20:28:34.687179   32635 main.go:141] libmachine: (ha-844626-m02)   <memory unit='MiB'>2200</memory>
	I0612 20:28:34.687188   32635 main.go:141] libmachine: (ha-844626-m02)   <vcpu>2</vcpu>
	I0612 20:28:34.687195   32635 main.go:141] libmachine: (ha-844626-m02)   <features>
	I0612 20:28:34.687225   32635 main.go:141] libmachine: (ha-844626-m02)     <acpi/>
	I0612 20:28:34.687246   32635 main.go:141] libmachine: (ha-844626-m02)     <apic/>
	I0612 20:28:34.687257   32635 main.go:141] libmachine: (ha-844626-m02)     <pae/>
	I0612 20:28:34.687271   32635 main.go:141] libmachine: (ha-844626-m02)     
	I0612 20:28:34.687284   32635 main.go:141] libmachine: (ha-844626-m02)   </features>
	I0612 20:28:34.687297   32635 main.go:141] libmachine: (ha-844626-m02)   <cpu mode='host-passthrough'>
	I0612 20:28:34.687313   32635 main.go:141] libmachine: (ha-844626-m02)   
	I0612 20:28:34.687323   32635 main.go:141] libmachine: (ha-844626-m02)   </cpu>
	I0612 20:28:34.687334   32635 main.go:141] libmachine: (ha-844626-m02)   <os>
	I0612 20:28:34.687351   32635 main.go:141] libmachine: (ha-844626-m02)     <type>hvm</type>
	I0612 20:28:34.687374   32635 main.go:141] libmachine: (ha-844626-m02)     <boot dev='cdrom'/>
	I0612 20:28:34.687395   32635 main.go:141] libmachine: (ha-844626-m02)     <boot dev='hd'/>
	I0612 20:28:34.687411   32635 main.go:141] libmachine: (ha-844626-m02)     <bootmenu enable='no'/>
	I0612 20:28:34.687427   32635 main.go:141] libmachine: (ha-844626-m02)   </os>
	I0612 20:28:34.687451   32635 main.go:141] libmachine: (ha-844626-m02)   <devices>
	I0612 20:28:34.687462   32635 main.go:141] libmachine: (ha-844626-m02)     <disk type='file' device='cdrom'>
	I0612 20:28:34.687475   32635 main.go:141] libmachine: (ha-844626-m02)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/boot2docker.iso'/>
	I0612 20:28:34.687483   32635 main.go:141] libmachine: (ha-844626-m02)       <target dev='hdc' bus='scsi'/>
	I0612 20:28:34.687488   32635 main.go:141] libmachine: (ha-844626-m02)       <readonly/>
	I0612 20:28:34.687495   32635 main.go:141] libmachine: (ha-844626-m02)     </disk>
	I0612 20:28:34.687502   32635 main.go:141] libmachine: (ha-844626-m02)     <disk type='file' device='disk'>
	I0612 20:28:34.687511   32635 main.go:141] libmachine: (ha-844626-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0612 20:28:34.687520   32635 main.go:141] libmachine: (ha-844626-m02)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/ha-844626-m02.rawdisk'/>
	I0612 20:28:34.687528   32635 main.go:141] libmachine: (ha-844626-m02)       <target dev='hda' bus='virtio'/>
	I0612 20:28:34.687534   32635 main.go:141] libmachine: (ha-844626-m02)     </disk>
	I0612 20:28:34.687541   32635 main.go:141] libmachine: (ha-844626-m02)     <interface type='network'>
	I0612 20:28:34.687548   32635 main.go:141] libmachine: (ha-844626-m02)       <source network='mk-ha-844626'/>
	I0612 20:28:34.687554   32635 main.go:141] libmachine: (ha-844626-m02)       <model type='virtio'/>
	I0612 20:28:34.687560   32635 main.go:141] libmachine: (ha-844626-m02)     </interface>
	I0612 20:28:34.687567   32635 main.go:141] libmachine: (ha-844626-m02)     <interface type='network'>
	I0612 20:28:34.687573   32635 main.go:141] libmachine: (ha-844626-m02)       <source network='default'/>
	I0612 20:28:34.687580   32635 main.go:141] libmachine: (ha-844626-m02)       <model type='virtio'/>
	I0612 20:28:34.687585   32635 main.go:141] libmachine: (ha-844626-m02)     </interface>
	I0612 20:28:34.687591   32635 main.go:141] libmachine: (ha-844626-m02)     <serial type='pty'>
	I0612 20:28:34.687597   32635 main.go:141] libmachine: (ha-844626-m02)       <target port='0'/>
	I0612 20:28:34.687604   32635 main.go:141] libmachine: (ha-844626-m02)     </serial>
	I0612 20:28:34.687611   32635 main.go:141] libmachine: (ha-844626-m02)     <console type='pty'>
	I0612 20:28:34.687618   32635 main.go:141] libmachine: (ha-844626-m02)       <target type='serial' port='0'/>
	I0612 20:28:34.687623   32635 main.go:141] libmachine: (ha-844626-m02)     </console>
	I0612 20:28:34.687629   32635 main.go:141] libmachine: (ha-844626-m02)     <rng model='virtio'>
	I0612 20:28:34.687646   32635 main.go:141] libmachine: (ha-844626-m02)       <backend model='random'>/dev/random</backend>
	I0612 20:28:34.687662   32635 main.go:141] libmachine: (ha-844626-m02)     </rng>
	I0612 20:28:34.687674   32635 main.go:141] libmachine: (ha-844626-m02)     
	I0612 20:28:34.687684   32635 main.go:141] libmachine: (ha-844626-m02)     
	I0612 20:28:34.687692   32635 main.go:141] libmachine: (ha-844626-m02)   </devices>
	I0612 20:28:34.687702   32635 main.go:141] libmachine: (ha-844626-m02) </domain>
	I0612 20:28:34.687715   32635 main.go:141] libmachine: (ha-844626-m02) 
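Illustrative aside (not part of the captured log): the XML block printed above is the libvirt domain definition for the m02 VM: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a cdrom, the raw disk image on a virtio bus, two virtio NICs (the private mk-ha-844626 network plus the default NAT network), a serial console and a virtio RNG. Below is a hedged sketch of generating a similar, heavily trimmed definition with Go's text/template; this template is not minikube's actual one.
	package main

	import (
		"os"
		"text/template"
	)

	// Trimmed domain definition: a real one also needs the ISO cdrom,
	// serial console and RNG devices shown in the log above.
	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.CPUs}}</vcpu>
	  <devices>
	    <disk type='file' device='disk'>
	      <source file='{{.DiskPath}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='{{.Network}}'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>
	`

	func main() {
		t := template.Must(template.New("domain").Parse(domainTmpl))
		_ = t.Execute(os.Stdout, struct {
			Name, DiskPath, Network string
			MemoryMiB, CPUs         int
		}{
			Name:      "ha-844626-m02",
			DiskPath:  "/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/ha-844626-m02.rawdisk",
			Network:   "mk-ha-844626",
			MemoryMiB: 2200,
			CPUs:      2,
		})
	}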
	I0612 20:28:34.694563   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:e6:9f:42 in network default
	I0612 20:28:34.695320   32635 main.go:141] libmachine: (ha-844626-m02) Ensuring networks are active...
	I0612 20:28:34.695340   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:34.695978   32635 main.go:141] libmachine: (ha-844626-m02) Ensuring network default is active
	I0612 20:28:34.696283   32635 main.go:141] libmachine: (ha-844626-m02) Ensuring network mk-ha-844626 is active
	I0612 20:28:34.696687   32635 main.go:141] libmachine: (ha-844626-m02) Getting domain xml...
	I0612 20:28:34.697350   32635 main.go:141] libmachine: (ha-844626-m02) Creating domain...
	I0612 20:28:35.897652   32635 main.go:141] libmachine: (ha-844626-m02) Waiting to get IP...
	I0612 20:28:35.898547   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:35.898977   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:35.899019   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:35.898970   33039 retry.go:31] will retry after 188.812483ms: waiting for machine to come up
	I0612 20:28:36.089381   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:36.089937   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:36.089970   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:36.089883   33039 retry.go:31] will retry after 248.337423ms: waiting for machine to come up
	I0612 20:28:36.339460   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:36.339915   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:36.339981   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:36.339859   33039 retry.go:31] will retry after 483.208215ms: waiting for machine to come up
	I0612 20:28:36.824482   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:36.825125   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:36.825153   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:36.825074   33039 retry.go:31] will retry after 448.029523ms: waiting for machine to come up
	I0612 20:28:37.274773   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:37.275250   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:37.275275   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:37.275225   33039 retry.go:31] will retry after 689.330075ms: waiting for machine to come up
	I0612 20:28:37.966768   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:37.967833   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:37.967867   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:37.967781   33039 retry.go:31] will retry after 820.730369ms: waiting for machine to come up
	I0612 20:28:38.789810   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:38.790276   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:38.790302   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:38.790220   33039 retry.go:31] will retry after 806.096624ms: waiting for machine to come up
	I0612 20:28:39.597586   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:39.598130   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:39.598156   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:39.598100   33039 retry.go:31] will retry after 971.914744ms: waiting for machine to come up
	I0612 20:28:40.571299   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:40.571718   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:40.571747   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:40.571677   33039 retry.go:31] will retry after 1.557937808s: waiting for machine to come up
	I0612 20:28:42.131638   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:42.132079   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:42.132105   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:42.132033   33039 retry.go:31] will retry after 1.545550008s: waiting for machine to come up
	I0612 20:28:43.679913   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:43.680458   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:43.680486   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:43.680399   33039 retry.go:31] will retry after 2.155457776s: waiting for machine to come up
	I0612 20:28:45.838147   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:45.838800   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:45.838837   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:45.838740   33039 retry.go:31] will retry after 2.378044585s: waiting for machine to come up
	I0612 20:28:48.220330   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:48.220887   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:48.220914   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:48.220850   33039 retry.go:31] will retry after 3.582059005s: waiting for machine to come up
	I0612 20:28:51.804217   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:51.804650   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find current IP address of domain ha-844626-m02 in network mk-ha-844626
	I0612 20:28:51.804681   32635 main.go:141] libmachine: (ha-844626-m02) DBG | I0612 20:28:51.804596   33039 retry.go:31] will retry after 5.387350068s: waiting for machine to come up
	I0612 20:28:57.195392   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.195961   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has current primary IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.195989   32635 main.go:141] libmachine: (ha-844626-m02) Found IP for machine: 192.168.39.108
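Illustrative aside (not part of the captured log): the "will retry after …" lines above show the wait for the new VM's DHCP lease, with delays growing from roughly 190ms to 5.4s, i.e. approximately exponential backoff with jitter. A minimal sketch of that pattern under assumptions follows; waitForIP and lookupLeaseIP are hypothetical names, and a real implementation would query libvirt's DHCP leases for the MAC address.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupLeaseIP is a stand-in for querying the network's DHCP leases.
	func lookupLeaseIP(mac string) (string, error) {
		return "", errors.New("no lease for " + mac + " yet")
	}

	// waitForIP polls with jittered, growing delays until a lease appears
	// or the deadline passes, mirroring the retry lines in the log.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupLeaseIP(mac); err == nil {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
	}

	func main() {
		if _, err := waitForIP("52:54:00:01:79:34", 2*time.Second); err != nil {
			fmt.Println(err)
		}
	}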
	I0612 20:28:57.196002   32635 main.go:141] libmachine: (ha-844626-m02) Reserving static IP address...
	I0612 20:28:57.196424   32635 main.go:141] libmachine: (ha-844626-m02) DBG | unable to find host DHCP lease matching {name: "ha-844626-m02", mac: "52:54:00:01:79:34", ip: "192.168.39.108"} in network mk-ha-844626
	I0612 20:28:57.265323   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Getting to WaitForSSH function...
	I0612 20:28:57.265349   32635 main.go:141] libmachine: (ha-844626-m02) Reserved static IP address: 192.168.39.108
	I0612 20:28:57.265369   32635 main.go:141] libmachine: (ha-844626-m02) Waiting for SSH to be available...
	I0612 20:28:57.267928   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.268262   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:minikube Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.268292   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.268445   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Using SSH client type: external
	I0612 20:28:57.268472   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa (-rw-------)
	I0612 20:28:57.268505   32635 main.go:141] libmachine: (ha-844626-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 20:28:57.268524   32635 main.go:141] libmachine: (ha-844626-m02) DBG | About to run SSH command:
	I0612 20:28:57.268538   32635 main.go:141] libmachine: (ha-844626-m02) DBG | exit 0
	I0612 20:28:57.395110   32635 main.go:141] libmachine: (ha-844626-m02) DBG | SSH cmd err, output: <nil>: 
	I0612 20:28:57.395444   32635 main.go:141] libmachine: (ha-844626-m02) KVM machine creation complete!
	I0612 20:28:57.395735   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetConfigRaw
	I0612 20:28:57.396315   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:57.396506   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:57.396679   32635 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0612 20:28:57.396694   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetState
	I0612 20:28:57.397968   32635 main.go:141] libmachine: Detecting operating system of created instance...
	I0612 20:28:57.397983   32635 main.go:141] libmachine: Waiting for SSH to be available...
	I0612 20:28:57.397991   32635 main.go:141] libmachine: Getting to WaitForSSH function...
	I0612 20:28:57.397999   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:57.400298   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.400617   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.400648   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.400741   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:57.400891   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.401040   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.401133   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:57.401264   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:57.401505   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0612 20:28:57.401518   32635 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0612 20:28:57.506466   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
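Illustrative aside (not part of the captured log): "Waiting for SSH to be available" is satisfied by running the trivial command exit 0 over SSH, as shown above. Below is a minimal sketch of that probe using the golang.org/x/crypto/ssh package; sshAvailable is a hypothetical name, and host key checking is disabled only because these are throwaway test VMs.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func sshAvailable(addr, user, keyPath string) error {
		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		// The same trivial command the log runs to prove SSH is up.
		return session.Run("exit 0")
	}

	func main() {
		err := sshAvailable("192.168.39.108:22", "docker",
			"/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa")
		fmt.Println("ssh available:", err == nil)
	}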
	I0612 20:28:57.506489   32635 main.go:141] libmachine: Detecting the provisioner...
	I0612 20:28:57.506498   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:57.509058   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.509414   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.509451   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.509619   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:57.509808   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.509945   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.510036   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:57.510211   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:57.510369   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0612 20:28:57.510379   32635 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0612 20:28:57.620069   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0612 20:28:57.620147   32635 main.go:141] libmachine: found compatible host: buildroot
	I0612 20:28:57.620157   32635 main.go:141] libmachine: Provisioning with buildroot...
	I0612 20:28:57.620164   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetMachineName
	I0612 20:28:57.620394   32635 buildroot.go:166] provisioning hostname "ha-844626-m02"
	I0612 20:28:57.620420   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetMachineName
	I0612 20:28:57.620587   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:57.623458   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.623898   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.623920   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.624077   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:57.624257   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.624421   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.624577   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:57.624740   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:57.624954   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0612 20:28:57.624974   32635 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844626-m02 && echo "ha-844626-m02" | sudo tee /etc/hostname
	I0612 20:28:57.749484   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844626-m02
	
	I0612 20:28:57.749508   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:57.752103   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.752525   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.752552   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.752756   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:57.752943   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.753115   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.753242   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:57.753437   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:57.753585   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0612 20:28:57.753600   32635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844626-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844626-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844626-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 20:28:57.868256   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 20:28:57.868289   32635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 20:28:57.868310   32635 buildroot.go:174] setting up certificates
	I0612 20:28:57.868322   32635 provision.go:84] configureAuth start
	I0612 20:28:57.868334   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetMachineName
	I0612 20:28:57.868621   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:28:57.870970   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.871384   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.871404   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.871578   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:57.873675   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.873971   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.873998   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.874119   32635 provision.go:143] copyHostCerts
	I0612 20:28:57.874150   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:28:57.874180   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 20:28:57.874188   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:28:57.874249   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 20:28:57.874321   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:28:57.874339   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 20:28:57.874345   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:28:57.874369   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 20:28:57.874411   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:28:57.874441   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 20:28:57.874447   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:28:57.874475   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 20:28:57.874523   32635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.ha-844626-m02 san=[127.0.0.1 192.168.39.108 ha-844626-m02 localhost minikube]
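Illustrative aside (not part of the captured log): the server certificate generated above carries SANs for 127.0.0.1, the node IP 192.168.39.108, the hostname ha-844626-m02, localhost and minikube, so the node's TLS endpoints verify under any of those names. A hedged sketch of producing a certificate with those SANs using Go's crypto/x509 follows; it is self-signed for brevity, whereas the step above signs with the cluster's ca-key.pem.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-844626-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-844626-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.108")},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}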
	I0612 20:28:57.943494   32635 provision.go:177] copyRemoteCerts
	I0612 20:28:57.943546   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 20:28:57.943573   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:57.945926   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.946234   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:57.946263   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:57.946411   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:57.946596   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:57.946739   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:57.946878   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	I0612 20:28:58.029866   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0612 20:28:58.029924   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 20:28:58.054564   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0612 20:28:58.054630   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0612 20:28:58.077787   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0612 20:28:58.077838   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 20:28:58.102564   32635 provision.go:87] duration metric: took 234.230123ms to configureAuth
	I0612 20:28:58.102588   32635 buildroot.go:189] setting minikube options for container-runtime
	I0612 20:28:58.102781   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:28:58.102856   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:58.105183   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.105582   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.105609   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.105780   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:58.105958   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:58.106118   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:58.106241   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:58.106395   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:58.106547   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0612 20:28:58.106560   32635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 20:28:58.369002   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 20:28:58.369044   32635 main.go:141] libmachine: Checking connection to Docker...
	I0612 20:28:58.369056   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetURL
	I0612 20:28:58.370275   32635 main.go:141] libmachine: (ha-844626-m02) DBG | Using libvirt version 6000000
	I0612 20:28:58.372493   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.372917   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.372946   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.373091   32635 main.go:141] libmachine: Docker is up and running!
	I0612 20:28:58.373106   32635 main.go:141] libmachine: Reticulating splines...
	I0612 20:28:58.373112   32635 client.go:171] duration metric: took 24.050299211s to LocalClient.Create
	I0612 20:28:58.373135   32635 start.go:167] duration metric: took 24.050353188s to libmachine.API.Create "ha-844626"
	I0612 20:28:58.373153   32635 start.go:293] postStartSetup for "ha-844626-m02" (driver="kvm2")
	I0612 20:28:58.373169   32635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 20:28:58.373192   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:58.373410   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 20:28:58.373429   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:58.375789   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.376115   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.376139   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.376262   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:58.376430   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:58.376585   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:58.376724   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	I0612 20:28:58.461494   32635 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 20:28:58.465756   32635 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 20:28:58.465773   32635 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 20:28:58.465842   32635 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 20:28:58.465945   32635 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 20:28:58.465956   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /etc/ssl/certs/214442.pem
	I0612 20:28:58.466033   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 20:28:58.475007   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:28:58.498583   32635 start.go:296] duration metric: took 125.415488ms for postStartSetup
	I0612 20:28:58.498630   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetConfigRaw
	I0612 20:28:58.499244   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:28:58.501609   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.501916   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.501943   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.502145   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:28:58.502300   32635 start.go:128] duration metric: took 24.197154786s to createHost
	I0612 20:28:58.502327   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:58.504428   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.504751   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.504779   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.504909   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:58.505058   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:58.505206   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:58.505333   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:58.505505   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:28:58.505675   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0612 20:28:58.505691   32635 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 20:28:58.611926   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718224138.588199886
	
	I0612 20:28:58.611948   32635 fix.go:216] guest clock: 1718224138.588199886
	I0612 20:28:58.611956   32635 fix.go:229] Guest: 2024-06-12 20:28:58.588199886 +0000 UTC Remote: 2024-06-12 20:28:58.502310999 +0000 UTC m=+77.562759418 (delta=85.888887ms)
	I0612 20:28:58.611969   32635 fix.go:200] guest clock delta is within tolerance: 85.888887ms
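The fix.go lines above read the guest's "date +%s.%N" over SSH, compare it with the host-side timestamp, and only resync when the delta exceeds a tolerance. A minimal Go sketch of that comparison, using the two timestamps from the log and a hypothetical 2-second tolerance (the threshold minikube actually applies is not shown here):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses "date +%s.%N" output (e.g. "1718224138.588199886") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/trim the fractional part to nanoseconds
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	// Hypothetical tolerance; the value minikube actually uses is not shown in this log.
	const tolerance = 2 * time.Second

	// Both values are taken from the fix.go lines above.
	guest, err := parseGuestClock("1718224138.588199886")
	if err != nil {
		panic(err)
	}
	host, err := time.Parse(time.RFC3339Nano, "2024-06-12T20:28:58.502310999Z")
	if err != nil {
		panic(err)
	}

	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
	}
}

Run as-is, the sketch reproduces the 85.888887ms delta logged above.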
	I0612 20:28:58.611974   32635 start.go:83] releasing machines lock for "ha-844626-m02", held for 24.306954637s
	I0612 20:28:58.611990   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:58.612277   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:28:58.614893   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.615331   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.615360   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.617819   32635 out.go:177] * Found network options:
	I0612 20:28:58.619210   32635 out.go:177]   - NO_PROXY=192.168.39.196
	W0612 20:28:58.620341   32635 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 20:28:58.620364   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:58.620832   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:58.621001   32635 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:28:58.621053   32635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 20:28:58.621093   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	W0612 20:28:58.621186   32635 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 20:28:58.621263   32635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 20:28:58.621283   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:28:58.623686   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.623949   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.624031   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.624057   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.624200   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:58.624331   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:28:58.624350   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:28:58.624390   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:58.624498   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:28:58.624671   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:28:58.624687   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:58.624855   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	I0612 20:28:58.624931   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:28:58.625116   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	I0612 20:28:58.876599   32635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 20:28:58.882927   32635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 20:28:58.882992   32635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 20:28:58.899597   32635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 20:28:58.899627   32635 start.go:494] detecting cgroup driver to use...
	I0612 20:28:58.899682   32635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 20:28:58.918649   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 20:28:58.935105   32635 docker.go:217] disabling cri-docker service (if available) ...
	I0612 20:28:58.935189   32635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 20:28:58.951300   32635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 20:28:58.967318   32635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 20:28:59.089527   32635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 20:28:59.248030   32635 docker.go:233] disabling docker service ...
	I0612 20:28:59.248104   32635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 20:28:59.262936   32635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 20:28:59.276351   32635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 20:28:59.401042   32635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 20:28:59.537934   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 20:28:59.552277   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 20:28:59.571272   32635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 20:28:59.571324   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:59.581780   32635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 20:28:59.581825   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:59.593899   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:59.605609   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:59.616721   32635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 20:28:59.628658   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:59.639910   32635 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:59.659020   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:28:59.670101   32635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 20:28:59.681844   32635 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 20:28:59.681903   32635 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 20:28:59.696113   32635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 20:28:59.705625   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:28:59.831551   32635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 20:28:59.977684   32635 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 20:28:59.977760   32635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 20:28:59.982563   32635 start.go:562] Will wait 60s for crictl version
	I0612 20:28:59.982603   32635 ssh_runner.go:195] Run: which crictl
	I0612 20:28:59.986337   32635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 20:29:00.028823   32635 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
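The crio.go steps above patch /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroupfs as cgroup manager, conmon_cgroup, a default_sysctls entry for unprivileged ports), load br_netfilter, enable IPv4 forwarding, and restart CRI-O before probing the socket and crictl. A self-contained Go sketch of just the config rewrites, applied to an in-memory stand-in for 02-crio.conf with regexes that mirror the sed expressions (file I/O, modprobe and the service restart are left out):

package main

import (
	"fmt"
	"regexp"
)

// A trimmed stand-in for /etc/crio/crio.conf.d/02-crio.conf.
const crioConf = `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

func main() {
	conf := crioConf

	// pause_image = "registry.k8s.io/pause:3.9"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	// cgroup_manager = "cgroupfs"
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// drop any existing conmon_cgroup line, then re-add it right after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	// ensure a default_sysctls block exists, then prepend the unprivileged-port sysctl
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf = regexp.MustCompile(`(?m)^(conmon_cgroup = .*)$`).
			ReplaceAllString(conf, "$1\ndefault_sysctls = [\n]")
	}
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")

	fmt.Print(conf)
}

The printed result matches the state the sed commands above leave the real file in: pause:3.9, cgroupfs, conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 in default_sysctls.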
	I0612 20:29:00.028916   32635 ssh_runner.go:195] Run: crio --version
	I0612 20:29:00.057647   32635 ssh_runner.go:195] Run: crio --version
	I0612 20:29:00.087556   32635 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 20:29:00.088913   32635 out.go:177]   - env NO_PROXY=192.168.39.196
	I0612 20:29:00.090027   32635 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:29:00.092690   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:29:00.093031   32635 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:28:48 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:29:00.093061   32635 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:29:00.093300   32635 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 20:29:00.097686   32635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 20:29:00.110349   32635 mustload.go:65] Loading cluster: ha-844626
	I0612 20:29:00.110562   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:29:00.110852   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:29:00.110876   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:29:00.125518   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38793
	I0612 20:29:00.125906   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:29:00.126358   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:29:00.126382   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:29:00.126721   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:29:00.126923   32635 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:29:00.128554   32635 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:29:00.128849   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:29:00.128879   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:29:00.143233   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38125
	I0612 20:29:00.143632   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:29:00.144016   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:29:00.144034   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:29:00.144329   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:29:00.144500   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:29:00.144652   32635 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626 for IP: 192.168.39.108
	I0612 20:29:00.144663   32635 certs.go:194] generating shared ca certs ...
	I0612 20:29:00.144677   32635 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:29:00.144812   32635 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 20:29:00.144865   32635 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 20:29:00.144877   32635 certs.go:256] generating profile certs ...
	I0612 20:29:00.144960   32635 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key
	I0612 20:29:00.145001   32635 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.059a86cd
	I0612 20:29:00.145021   32635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.059a86cd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.108 192.168.39.254]
	I0612 20:29:00.584225   32635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.059a86cd ...
	I0612 20:29:00.584254   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.059a86cd: {Name:mkf7f603aba2d032d0ddac91ace726374be7c03e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:29:00.584414   32635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.059a86cd ...
	I0612 20:29:00.584428   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.059a86cd: {Name:mkaf0bf5abb5b3686773dca74b383000e538c998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:29:00.584501   32635 certs.go:381] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.059a86cd -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt
	I0612 20:29:00.584630   32635 certs.go:385] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.059a86cd -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key
	I0612 20:29:00.584748   32635 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key
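crypto.go above issues the apiserver serving certificate with IP SANs covering the in-cluster service IP 10.96.0.1, localhost, both control-plane node addresses and the kube-vip VIP 192.168.39.254, signs it with minikubeCA, and then copies the versioned .crt/.key pair to the canonical apiserver.crt/apiserver.key names. A compact sketch of issuing a certificate with those SANs using Go's crypto/x509; a throwaway self-signed CA stands in for minikubeCA and error handling is elided:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway self-signed CA standing in for minikubeCA (error handling elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// API server serving certificate with the IP SANs listed in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.196"), net.ParseIP("192.168.39.108"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the signed certificate; the private key would be PEM-encoded alongside it the same way.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}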
	I0612 20:29:00.584766   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 20:29:00.584778   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0612 20:29:00.584788   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 20:29:00.584798   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 20:29:00.584811   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 20:29:00.584823   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 20:29:00.584836   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 20:29:00.584847   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 20:29:00.584893   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 20:29:00.584920   32635 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 20:29:00.584928   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 20:29:00.584950   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 20:29:00.584970   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 20:29:00.584991   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 20:29:00.585032   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:29:00.585057   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /usr/share/ca-certificates/214442.pem
	I0612 20:29:00.585071   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:29:00.585083   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem -> /usr/share/ca-certificates/21444.pem
	I0612 20:29:00.585112   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:29:00.587738   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:29:00.588068   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:29:00.588095   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:29:00.588340   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:29:00.588551   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:29:00.588725   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:29:00.588868   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:29:00.659476   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0612 20:29:00.664422   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0612 20:29:00.675398   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0612 20:29:00.679539   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0612 20:29:00.691907   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0612 20:29:00.698123   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0612 20:29:00.708858   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0612 20:29:00.712859   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0612 20:29:00.723264   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0612 20:29:00.727309   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0612 20:29:00.737484   32635 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0612 20:29:00.741545   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0612 20:29:00.752784   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 20:29:00.779695   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 20:29:00.804527   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 20:29:00.828348   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 20:29:00.852299   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0612 20:29:00.877114   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 20:29:00.903800   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 20:29:00.927395   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 20:29:00.951574   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 20:29:00.975157   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 20:29:00.998299   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 20:29:01.021449   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0612 20:29:01.038090   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0612 20:29:01.053868   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0612 20:29:01.070256   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0612 20:29:01.087166   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0612 20:29:01.103247   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0612 20:29:01.119210   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0612 20:29:01.135180   32635 ssh_runner.go:195] Run: openssl version
	I0612 20:29:01.140742   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 20:29:01.150801   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 20:29:01.155105   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 20:29:01.155153   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 20:29:01.160789   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 20:29:01.171116   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 20:29:01.181677   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:29:01.186153   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:29:01.186187   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:29:01.191804   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 20:29:01.201928   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 20:29:01.211840   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 20:29:01.216015   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 20:29:01.216068   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 20:29:01.221627   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
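Each block above runs openssl x509 -hash -noout against a CA bundle in /usr/share/ca-certificates and links it as /etc/ssl/certs/<hash>.0, which is how OpenSSL-style trust stores index CA certificates by subject hash. A small Go sketch of that hash-and-symlink step, shelling out to openssl the same way; the paths in main are placeholders, so point them at any local PEM certificate and a writable directory:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink creates the <certsDir>/<subject-hash>.0 style symlink for certPath.
func hashLink(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("openssl x509 -hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")

	// mirror `test -L link || ln -fs cert link`
	if _, err := os.Lstat(link); err == nil {
		return link, nil // already present
	}
	if err := os.Symlink(certPath, link); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	// Placeholder paths; substitute a real PEM certificate and a writable directory.
	link, err := hashLink("./minikubeCA.pem", ".")
	if err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}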
	I0612 20:29:01.231647   32635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 20:29:01.235662   32635 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 20:29:01.235717   32635 kubeadm.go:928] updating node {m02 192.168.39.108 8443 v1.30.1 crio true true} ...
	I0612 20:29:01.235801   32635 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844626-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 20:29:01.235825   32635 kube-vip.go:115] generating kube-vip config ...
	I0612 20:29:01.235857   32635 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0612 20:29:01.251220   32635 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0612 20:29:01.251298   32635 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
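The manifest above is rendered by kube-vip.go with the cluster VIP (192.168.39.254), port 8443 and interface eth0 baked into the container environment; further down the log it is copied to /etc/kubernetes/manifests/kube-vip.yaml, where the kubelet runs it as a static pod, so each control-plane node runs kube-vip and the leader elected via the plndr-cp-lock lease announces the VIP. A trimmed sketch of rendering such a manifest with text/template (only a few of the env vars are reproduced; the authoritative content is the manifest printed above):

package main

import (
	"os"
	"text/template"
)

// A trimmed version of the manifest printed above; only a handful of env vars are kept.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: cp_enable
      value: "true"
  hostNetwork: true
`

type vipConfig struct {
	VIP       string
	Port      int
	Interface string
}

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the manifest in the log above.
	cfg := vipConfig{VIP: "192.168.39.254", Port: 8443, Interface: "eth0"}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}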
	I0612 20:29:01.251361   32635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 20:29:01.261705   32635 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0612 20:29:01.261774   32635 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0612 20:29:01.271597   32635 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0612 20:29:01.271624   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 20:29:01.271688   32635 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 20:29:01.271700   32635 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0612 20:29:01.271724   32635 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0612 20:29:01.277634   32635 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0612 20:29:01.277663   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0612 20:29:35.260896   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 20:29:35.260971   32635 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 20:29:35.267701   32635 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0612 20:29:35.267734   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0612 20:30:04.149643   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:30:04.167195   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 20:30:04.167292   32635 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 20:30:04.172548   32635 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0612 20:30:04.172583   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
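download.go and binary.go above pull kubectl, kubeadm and kubelet from dl.k8s.io into the local cache (verified against the published sha256), and ssh_runner then stats each path under /var/lib/minikube/binaries/v1.30.1 and scp's a binary only when the stat fails. A local Go sketch of that ensure-if-missing step, with a plain file copy standing in for the scp; both paths in main are placeholders:

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies src to dst only if dst does not already exist,
// mirroring the "stat, then transfer when missing" pattern in the log above.
func ensureBinary(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("%s already present, skipping\n", dst)
		return nil
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	fmt.Printf("copied %s -> %s (%d bytes)\n", src, dst, n)
	return nil
}

func main() {
	// Placeholder paths standing in for the cache and /var/lib/minikube/binaries/<version>.
	if err := ensureBinary("./cache/kubelet", "./binaries/v1.30.1/kubelet"); err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
}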
	I0612 20:30:04.597711   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0612 20:30:04.609658   32635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0612 20:30:04.628872   32635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 20:30:04.648368   32635 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0612 20:30:04.668237   32635 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0612 20:30:04.673102   32635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
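The pipeline above rewrites /etc/hosts by filtering out any existing control-plane.minikube.internal line, appending the VIP mapping, and copying the result back through a temp file. The same upsert expressed directly in Go, operating on an in-memory hosts snippet instead of the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any line ending in "\t<host>" and appends "ip\thost",
// matching the grep -v / echo / cp pipeline in the log above.
func upsertHost(contents, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	updated := upsertHost(hosts, "192.168.39.254", "control-plane.minikube.internal")

	// Write through a temp file, like the `> /tmp/h.$$; sudo cp` step above.
	tmp, err := os.CreateTemp("", "hosts-")
	if err != nil {
		panic(err)
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.WriteString(updated); err != nil {
		panic(err)
	}
	tmp.Close()
	fmt.Print(updated)
}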
	I0612 20:30:04.688620   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:30:04.817286   32635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 20:30:04.835873   32635 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:30:04.836330   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:30:04.836397   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:30:04.851181   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0612 20:30:04.851694   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:30:04.852206   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:30:04.852230   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:30:04.852564   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:30:04.852806   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:30:04.852975   32635 start.go:316] joinCluster: &{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:30:04.853091   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0612 20:30:04.853109   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:30:04.856247   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:30:04.856732   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:30:04.856761   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:30:04.856957   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:30:04.857130   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:30:04.857326   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:30:04.857490   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:30:05.018629   32635 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:30:05.018685   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hnkl3e.rou3l3k48xkgmpst --discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844626-m02 --control-plane --apiserver-advertise-address=192.168.39.108 --apiserver-bind-port=8443"
	I0612 20:30:27.495808   32635 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hnkl3e.rou3l3k48xkgmpst --discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844626-m02 --control-plane --apiserver-advertise-address=192.168.39.108 --apiserver-bind-port=8443": (22.477083857s)
	I0612 20:30:27.495845   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0612 20:30:28.088244   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844626-m02 minikube.k8s.io/updated_at=2024_06_12T20_30_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=ha-844626 minikube.k8s.io/primary=false
	I0612 20:30:28.204508   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-844626-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0612 20:30:28.312684   32635 start.go:318] duration metric: took 23.459705481s to joinCluster
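The kubeadm join above pins the cluster CA with --discovery-token-ca-cert-hash, the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info (the value kubeadm token create --print-join-command embeds). A sketch of computing that hash from a ca.crt PEM file; the path is a placeholder for something like /var/lib/minikube/certs/ca.crt:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Placeholder path; point it at the cluster CA certificate.
	data, err := os.ReadFile("./ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read ca.crt:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse certificate:", err)
		os.Exit(1)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, "marshal public key:", err)
		os.Exit(1)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("--discovery-token-ca-cert-hash sha256:%s\n", hex.EncodeToString(sum[:]))
}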
	I0612 20:30:28.312755   32635 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:30:28.314300   32635 out.go:177] * Verifying Kubernetes components...
	I0612 20:30:28.313074   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:30:28.316145   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:30:28.565704   32635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 20:30:28.626283   32635 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:30:28.626628   32635 kapi.go:59] client config for ha-844626: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.crt", KeyFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key", CAFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfb000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0612 20:30:28.626714   32635 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.196:8443
	I0612 20:30:28.626982   32635 node_ready.go:35] waiting up to 6m0s for node "ha-844626-m02" to be "Ready" ...
	I0612 20:30:28.627077   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:28.627089   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:28.627101   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:28.627112   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:28.644797   32635 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0612 20:30:29.127366   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:29.127389   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:29.127398   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:29.127402   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:29.133873   32635 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 20:30:29.627278   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:29.627300   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:29.627308   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:29.627312   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:29.633747   32635 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 20:30:30.127280   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:30.127303   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:30.127311   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:30.127314   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:30.132068   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:30:30.628053   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:30.628079   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:30.628086   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:30.628095   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:30.631828   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:30.632601   32635 node_ready.go:53] node "ha-844626-m02" has status "Ready":"False"
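node_ready.go above polls GET /api/v1/nodes/ha-844626-m02 roughly every half second and checks the node's Ready condition until it reports True or the 6-minute budget runs out. A stripped-down sketch of that poll with net/http and a minimal JSON struct; the TLS client certificates and headers shown in the round_trippers lines are omitted, so this assumes an API endpoint reachable without auth, for example via kubectl proxy:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady fetches the node object and reports whether its Ready condition is "True".
func nodeReady(apiServer, node string) (bool, error) {
	resp, err := http.Get(apiServer + "/api/v1/nodes/" + node)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return false, fmt.Errorf("unexpected status: %s", resp.Status)
	}
	var n nodeStatus
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	// Assumes `kubectl proxy` is serving the API without auth on this address.
	const apiServer = "http://127.0.0.1:8001"
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m wait in the log above
	for time.Now().Before(deadline) {
		ready, err := nodeReady(apiServer, "ha-844626-m02")
		if err != nil {
			fmt.Println("poll error:", err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}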
	I0612 20:30:31.127764   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:31.127786   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:31.127793   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:31.127797   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:31.131236   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:31.627205   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:31.627228   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:31.627237   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:31.627240   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:31.630142   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:32.128033   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:32.128060   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:32.128070   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:32.128075   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:32.132429   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:30:32.627370   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:32.627395   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:32.627406   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:32.627411   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:32.630683   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:33.127539   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:33.127559   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:33.127566   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:33.127570   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:33.131292   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:33.132188   32635 node_ready.go:53] node "ha-844626-m02" has status "Ready":"False"
	I0612 20:30:33.627727   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:33.627749   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:33.627757   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:33.627761   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:33.631079   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:34.127319   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:34.127346   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:34.127358   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:34.127382   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:34.133781   32635 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 20:30:34.627337   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:34.627371   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:34.627383   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:34.627395   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:34.630687   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:35.127279   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:35.127310   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:35.127321   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:35.127327   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:35.131713   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:30:35.132298   32635 node_ready.go:53] node "ha-844626-m02" has status "Ready":"False"
	I0612 20:30:35.627486   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:35.627512   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:35.627520   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:35.627524   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:35.632682   32635 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 20:30:36.128136   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:36.128159   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:36.128169   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:36.128174   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:36.132160   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:36.628244   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:36.628274   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:36.628285   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:36.628290   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:36.631786   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:37.127909   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:37.127933   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.127942   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.127947   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.132108   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:30:37.132653   32635 node_ready.go:49] node "ha-844626-m02" has status "Ready":"True"
	I0612 20:30:37.132670   32635 node_ready.go:38] duration metric: took 8.505668168s for node "ha-844626-m02" to be "Ready" ...
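The loop above is the node_ready wait: the Node object is re-fetched roughly every 500ms until its Ready condition reports True. A minimal client-go sketch of that check, written as a standalone program with a hypothetical kubeconfig path (an illustration, not minikube's actual node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the test harness builds its client differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-844626-m02", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		// Matches the ~500ms polling cadence visible in the timestamps above.
		time.Sleep(500 * time.Millisecond)
	}
}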
	I0612 20:30:37.132678   32635 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 20:30:37.132747   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:30:37.132761   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.132767   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.132772   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.138422   32635 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 20:30:37.146045   32635 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bqzvn" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:37.146114   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bqzvn
	I0612 20:30:37.146123   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.146130   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.146134   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.149376   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:37.150108   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:37.150126   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.150136   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.150143   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.152828   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:37.153449   32635 pod_ready.go:92] pod "coredns-7db6d8ff4d-bqzvn" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:37.153465   32635 pod_ready.go:81] duration metric: took 7.398951ms for pod "coredns-7db6d8ff4d-bqzvn" in "kube-system" namespace to be "Ready" ...
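Each pod_ready wait is the same pattern at pod granularity: fetch the pod, read its Ready condition, then fetch the node it is scheduled on (the paired GETs above). A hedged standalone sketch of the per-pod half of that check, again with a hypothetical kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-bqzvn", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			// A "True" status here corresponds to the pod_ready.go:92 lines in the log.
			fmt.Printf("pod %s Ready=%s on node %s\n", pod.Name, c.Status, pod.Spec.NodeName)
		}
	}
}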
	I0612 20:30:37.153476   32635 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lxd6n" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:37.153526   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lxd6n
	I0612 20:30:37.153536   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.153546   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.153555   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.156152   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:37.156651   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:37.156663   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.156670   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.156674   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.158912   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:37.159577   32635 pod_ready.go:92] pod "coredns-7db6d8ff4d-lxd6n" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:37.159595   32635 pod_ready.go:81] duration metric: took 6.112307ms for pod "coredns-7db6d8ff4d-lxd6n" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:37.159606   32635 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:37.159656   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626
	I0612 20:30:37.159666   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.159676   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.159681   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.161869   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:37.162386   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:37.162399   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.162404   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.162409   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.164686   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:37.165142   32635 pod_ready.go:92] pod "etcd-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:37.165154   32635 pod_ready.go:81] duration metric: took 5.543189ms for pod "etcd-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:37.165161   32635 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:37.165228   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:37.165237   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.165245   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.165251   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.167587   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:37.168062   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:37.168074   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.168081   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.168084   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.170706   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:37.665823   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:37.665843   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.665851   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.665855   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.669692   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:37.670698   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:37.670711   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:37.670719   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:37.670731   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:37.673438   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:38.166072   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:38.166096   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:38.166108   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:38.166115   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:38.169307   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:38.169965   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:38.169983   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:38.169990   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:38.169993   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:38.173351   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:38.666212   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:38.666235   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:38.666247   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:38.666254   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:38.669485   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:38.670105   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:38.670119   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:38.670126   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:38.670130   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:38.672944   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:39.165744   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:39.165767   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:39.165774   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:39.165778   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:39.169896   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:30:39.171068   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:39.171088   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:39.171098   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:39.171105   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:39.174363   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:39.175307   32635 pod_ready.go:102] pod "etcd-ha-844626-m02" in "kube-system" namespace has status "Ready":"False"
	I0612 20:30:39.665878   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:39.665899   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:39.665908   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:39.665912   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:39.669567   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:39.670178   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:39.670198   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:39.670205   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:39.670209   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:39.673057   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:40.166324   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:40.166346   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.166359   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.166364   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.169985   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:40.170799   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:40.170815   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.170823   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.170829   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.173509   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:40.665467   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:30:40.665488   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.665495   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.665499   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.670274   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:30:40.671068   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:40.671086   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.671096   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.671102   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.675028   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:40.675988   32635 pod_ready.go:92] pod "etcd-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:40.676010   32635 pod_ready.go:81] duration metric: took 3.510836172s for pod "etcd-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:40.676030   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:40.676097   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844626
	I0612 20:30:40.676107   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.676117   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.676124   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.679035   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:40.679983   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:40.679996   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.680005   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.680010   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.682159   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:40.682753   32635 pod_ready.go:92] pod "kube-apiserver-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:40.682767   32635 pod_ready.go:81] duration metric: took 6.726967ms for pod "kube-apiserver-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:40.682779   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:40.682839   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844626-m02
	I0612 20:30:40.682849   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.682859   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.682869   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.685046   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:40.728756   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:40.728776   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.728785   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.728789   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.732550   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:40.733159   32635 pod_ready.go:92] pod "kube-apiserver-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:40.733177   32635 pod_ready.go:81] duration metric: took 50.388231ms for pod "kube-apiserver-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:40.733186   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:40.928599   32635 request.go:629] Waited for 195.361017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626
	I0612 20:30:40.928683   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626
	I0612 20:30:40.928692   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:40.928699   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:40.928704   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:40.931955   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:41.128801   32635 request.go:629] Waited for 195.731901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:41.128869   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:41.128874   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:41.128881   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:41.128889   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:41.132053   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:41.132799   32635 pod_ready.go:92] pod "kube-controller-manager-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:41.132816   32635 pod_ready.go:81] duration metric: took 399.625232ms for pod "kube-controller-manager-ha-844626" in "kube-system" namespace to be "Ready" ...
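The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's local rate limiter, not from the API server. A hedged sketch of the knob involved (the QPS/Burst values below are illustrative, not what this tooling actually sets):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	// client-go defaults to roughly 5 QPS with a burst of 10; once the burst is
	// spent, requests are delayed locally, which is what produces the
	// "Waited for ..." request.go:629 lines seen in this log.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", cs != nil)
}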
	I0612 20:30:41.132825   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:41.328901   32635 request.go:629] Waited for 196.017593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:41.328966   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:41.328971   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:41.328978   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:41.328982   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:41.331865   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:41.528889   32635 request.go:629] Waited for 196.344493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:41.528936   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:41.528941   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:41.528949   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:41.528953   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:41.532714   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:41.728477   32635 request.go:629] Waited for 95.271831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:41.728538   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:41.728557   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:41.728570   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:41.728580   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:41.732043   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:41.928219   32635 request.go:629] Waited for 195.361825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:41.928276   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:41.928291   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:41.928298   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:41.928305   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:41.931981   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:42.133202   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:42.133221   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:42.133229   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:42.133234   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:42.136376   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:42.328629   32635 request.go:629] Waited for 191.365593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:42.328707   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:42.328715   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:42.328725   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:42.328730   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:42.332363   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:42.633285   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:42.633310   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:42.633320   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:42.633327   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:42.636925   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:42.728953   32635 request.go:629] Waited for 91.26389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:42.729002   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:42.729014   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:42.729031   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:42.729037   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:42.732484   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:43.133607   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:43.133629   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:43.133636   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:43.133640   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:43.137400   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:43.138180   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:43.138198   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:43.138208   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:43.138214   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:43.141025   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:43.141740   32635 pod_ready.go:102] pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace has status "Ready":"False"
	I0612 20:30:43.633276   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:43.633299   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:43.633310   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:43.633315   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:43.636399   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:43.637201   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:43.637225   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:43.637233   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:43.637237   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:43.640005   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:44.133098   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:44.133126   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:44.133139   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:44.133143   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:44.138749   32635 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 20:30:44.139503   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:44.139528   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:44.139535   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:44.139542   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:44.142326   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:44.633872   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:44.633893   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:44.633901   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:44.633904   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:44.637714   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:44.638335   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:44.638351   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:44.638362   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:44.638368   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:44.641221   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:45.133802   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:45.133824   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:45.133831   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:45.133835   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:45.136674   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:45.137376   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:45.137389   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:45.137397   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:45.137401   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:45.140982   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:45.633810   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:45.633832   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:45.633840   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:45.633843   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:45.637360   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:45.637973   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:45.637989   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:45.637999   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:45.638005   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:45.640544   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:45.641041   32635 pod_ready.go:102] pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace has status "Ready":"False"
	I0612 20:30:46.133029   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:30:46.133052   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.133059   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.133065   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.136158   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:46.137033   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:46.137047   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.137054   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.137058   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.139843   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:46.140461   32635 pod_ready.go:92] pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:46.140480   32635 pod_ready.go:81] duration metric: took 5.007648409s for pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:46.140489   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-69ctp" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:46.140558   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-69ctp
	I0612 20:30:46.140571   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.140580   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.140587   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.143798   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:46.144406   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:46.144419   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.144425   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.144435   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.153535   32635 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 20:30:46.154204   32635 pod_ready.go:92] pod "kube-proxy-69ctp" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:46.154222   32635 pod_ready.go:81] duration metric: took 13.726572ms for pod "kube-proxy-69ctp" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:46.154231   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f7ct8" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:46.154287   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f7ct8
	I0612 20:30:46.154294   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.154302   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.154309   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.156591   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:30:46.328647   32635 request.go:629] Waited for 171.371767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:46.328700   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:46.328707   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.328714   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.328720   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.331955   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:46.333387   32635 pod_ready.go:92] pod "kube-proxy-f7ct8" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:46.333406   32635 pod_ready.go:81] duration metric: took 179.1699ms for pod "kube-proxy-f7ct8" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:46.333416   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:46.528928   32635 request.go:629] Waited for 195.451982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626
	I0612 20:30:46.528997   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626
	I0612 20:30:46.529005   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.529016   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.529021   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.532898   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:46.728720   32635 request.go:629] Waited for 195.095407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:46.728799   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:30:46.728807   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.728818   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.728828   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.732323   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:46.733062   32635 pod_ready.go:92] pod "kube-scheduler-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:46.733082   32635 pod_ready.go:81] duration metric: took 399.660168ms for pod "kube-scheduler-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:46.733096   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:46.928012   32635 request.go:629] Waited for 194.843878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626-m02
	I0612 20:30:46.928085   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626-m02
	I0612 20:30:46.928091   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:46.928099   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:46.928103   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:46.931580   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:47.128657   32635 request.go:629] Waited for 196.421042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:47.128741   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:30:47.128755   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:47.128764   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:47.128776   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:47.132817   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:30:47.133589   32635 pod_ready.go:92] pod "kube-scheduler-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:30:47.133609   32635 pod_ready.go:81] duration metric: took 400.502952ms for pod "kube-scheduler-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:30:47.133623   32635 pod_ready.go:38] duration metric: took 10.000934337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 20:30:47.133647   32635 api_server.go:52] waiting for apiserver process to appear ...
	I0612 20:30:47.133705   32635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:30:47.149876   32635 api_server.go:72] duration metric: took 18.837089852s to wait for apiserver process to appear ...
	I0612 20:30:47.149901   32635 api_server.go:88] waiting for apiserver healthz status ...
	I0612 20:30:47.149916   32635 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0612 20:30:47.157443   32635 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0612 20:30:47.157515   32635 round_trippers.go:463] GET https://192.168.39.196:8443/version
	I0612 20:30:47.157527   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:47.157539   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:47.157549   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:47.159286   32635 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 20:30:47.159522   32635 api_server.go:141] control plane version: v1.30.1
	I0612 20:30:47.159544   32635 api_server.go:131] duration metric: took 9.636955ms to wait for apiserver health ...
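The probe above pgreps for the kube-apiserver process on the guest, GETs /healthz (expecting the literal body "ok"), and then reads /version. A minimal client-go sketch of the HTTP half of that probe, with a hypothetical kubeconfig path:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz through the authenticated REST client; a healthy apiserver answers "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println("healthz:", string(body))
	// GET /version; this run reports v1.30.1.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}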
	I0612 20:30:47.159557   32635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 20:30:47.327918   32635 request.go:629] Waited for 168.289713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:30:47.327976   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:30:47.328004   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:47.328012   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:47.328017   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:47.335146   32635 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 20:30:47.340883   32635 system_pods.go:59] 17 kube-system pods found
	I0612 20:30:47.340913   32635 system_pods.go:61] "coredns-7db6d8ff4d-bqzvn" [b22b3ba0-1a59-4066-9db5-380986d73dca] Running
	I0612 20:30:47.340919   32635 system_pods.go:61] "coredns-7db6d8ff4d-lxd6n" [65d25d78-6fa7-4dc7-9cf2-e2fac796f194] Running
	I0612 20:30:47.340925   32635 system_pods.go:61] "etcd-ha-844626" [73812d48-addc-4957-ae24-6bbad2f5fbaa] Running
	I0612 20:30:47.340930   32635 system_pods.go:61] "etcd-ha-844626-m02" [57d89f35-94d4-4b64-a648-c440eaddef2a] Running
	I0612 20:30:47.340934   32635 system_pods.go:61] "kindnet-fz6bl" [fb946e9f-19cd-4a9f-8585-88118c840922] Running
	I0612 20:30:47.340939   32635 system_pods.go:61] "kindnet-mthnq" [49950bb0-368d-4239-ae93-04c980a8b531] Running
	I0612 20:30:47.340943   32635 system_pods.go:61] "kube-apiserver-ha-844626" [0e8ba551-e997-453a-b76f-a090a441bce4] Running
	I0612 20:30:47.340948   32635 system_pods.go:61] "kube-apiserver-ha-844626-m02" [eeaf9c1b-e433-4de6-b6e8-4c33cd467a42] Running
	I0612 20:30:47.340952   32635 system_pods.go:61] "kube-controller-manager-ha-844626" [9bca7a0a-74d1-4b9c-9915-2cf6a4eb5e52] Running
	I0612 20:30:47.340958   32635 system_pods.go:61] "kube-controller-manager-ha-844626-m02" [6e26986e-06e4-4e85-b83d-57c2254732f0] Running
	I0612 20:30:47.340963   32635 system_pods.go:61] "kube-proxy-69ctp" [c66149e8-2a69-4f1f-9ddc-5e272204e6f5] Running
	I0612 20:30:47.340968   32635 system_pods.go:61] "kube-proxy-f7ct8" [4bf3e7e1-68e8-4d0d-980b-cb5055e10365] Running
	I0612 20:30:47.340976   32635 system_pods.go:61] "kube-scheduler-ha-844626" [49238394-1429-40ce-8d74-290b1743547f] Running
	I0612 20:30:47.340986   32635 system_pods.go:61] "kube-scheduler-ha-844626-m02" [488c0960-8abb-40d1-a92e-bd4f61b5973b] Running
	I0612 20:30:47.340992   32635 system_pods.go:61] "kube-vip-ha-844626" [654fd183-21b0-4df5-b557-ed676c5ecb71] Running
	I0612 20:30:47.340999   32635 system_pods.go:61] "kube-vip-ha-844626-m02" [c7785d9d-bfc0-4f65-b853-36a7f2ba791b] Running
	I0612 20:30:47.341004   32635 system_pods.go:61] "storage-provisioner" [d94c16d7-da82-41e3-82fe-83ed6e581f69] Running
	I0612 20:30:47.341012   32635 system_pods.go:74] duration metric: took 181.444751ms to wait for pod list to return data ...
	I0612 20:30:47.341022   32635 default_sa.go:34] waiting for default service account to be created ...
	I0612 20:30:47.528373   32635 request.go:629] Waited for 187.26726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0612 20:30:47.528437   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0612 20:30:47.528443   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:47.528450   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:47.528454   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:47.532161   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:47.532374   32635 default_sa.go:45] found service account: "default"
	I0612 20:30:47.532392   32635 default_sa.go:55] duration metric: took 191.363691ms for default service account to be created ...
	I0612 20:30:47.532402   32635 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 20:30:47.728905   32635 request.go:629] Waited for 196.437134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:30:47.728985   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:30:47.728995   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:47.729006   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:47.729013   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:47.734247   32635 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 20:30:47.738940   32635 system_pods.go:86] 17 kube-system pods found
	I0612 20:30:47.738961   32635 system_pods.go:89] "coredns-7db6d8ff4d-bqzvn" [b22b3ba0-1a59-4066-9db5-380986d73dca] Running
	I0612 20:30:47.738967   32635 system_pods.go:89] "coredns-7db6d8ff4d-lxd6n" [65d25d78-6fa7-4dc7-9cf2-e2fac796f194] Running
	I0612 20:30:47.738971   32635 system_pods.go:89] "etcd-ha-844626" [73812d48-addc-4957-ae24-6bbad2f5fbaa] Running
	I0612 20:30:47.738975   32635 system_pods.go:89] "etcd-ha-844626-m02" [57d89f35-94d4-4b64-a648-c440eaddef2a] Running
	I0612 20:30:47.738979   32635 system_pods.go:89] "kindnet-fz6bl" [fb946e9f-19cd-4a9f-8585-88118c840922] Running
	I0612 20:30:47.738985   32635 system_pods.go:89] "kindnet-mthnq" [49950bb0-368d-4239-ae93-04c980a8b531] Running
	I0612 20:30:47.738991   32635 system_pods.go:89] "kube-apiserver-ha-844626" [0e8ba551-e997-453a-b76f-a090a441bce4] Running
	I0612 20:30:47.738996   32635 system_pods.go:89] "kube-apiserver-ha-844626-m02" [eeaf9c1b-e433-4de6-b6e8-4c33cd467a42] Running
	I0612 20:30:47.739002   32635 system_pods.go:89] "kube-controller-manager-ha-844626" [9bca7a0a-74d1-4b9c-9915-2cf6a4eb5e52] Running
	I0612 20:30:47.739008   32635 system_pods.go:89] "kube-controller-manager-ha-844626-m02" [6e26986e-06e4-4e85-b83d-57c2254732f0] Running
	I0612 20:30:47.739012   32635 system_pods.go:89] "kube-proxy-69ctp" [c66149e8-2a69-4f1f-9ddc-5e272204e6f5] Running
	I0612 20:30:47.739017   32635 system_pods.go:89] "kube-proxy-f7ct8" [4bf3e7e1-68e8-4d0d-980b-cb5055e10365] Running
	I0612 20:30:47.739021   32635 system_pods.go:89] "kube-scheduler-ha-844626" [49238394-1429-40ce-8d74-290b1743547f] Running
	I0612 20:30:47.739025   32635 system_pods.go:89] "kube-scheduler-ha-844626-m02" [488c0960-8abb-40d1-a92e-bd4f61b5973b] Running
	I0612 20:30:47.739029   32635 system_pods.go:89] "kube-vip-ha-844626" [654fd183-21b0-4df5-b557-ed676c5ecb71] Running
	I0612 20:30:47.739032   32635 system_pods.go:89] "kube-vip-ha-844626-m02" [c7785d9d-bfc0-4f65-b853-36a7f2ba791b] Running
	I0612 20:30:47.739036   32635 system_pods.go:89] "storage-provisioner" [d94c16d7-da82-41e3-82fe-83ed6e581f69] Running
	I0612 20:30:47.739042   32635 system_pods.go:126] duration metric: took 206.634655ms to wait for k8s-apps to be running ...
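Both kube-system sweeps above (system_pods.go:43 and system_pods.go:116) are plain namespace listings followed by a phase check on each pod. A hedged client-go sketch of that listing, with a hypothetical kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		running := p.Status.Phase == corev1.PodRunning
		fmt.Printf("%q [%s] Running=%v\n", p.Name, p.UID, running)
	}
}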
	I0612 20:30:47.739051   32635 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 20:30:47.739091   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:30:47.756076   32635 system_svc.go:56] duration metric: took 17.016768ms WaitForService to wait for kubelet
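The kubelet check above simply runs "sudo systemctl is-active --quiet kubelet" on the guest through minikube's SSH runner. A rough local-host approximation in Go (illustrative only; it does not go over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "is-active --quiet" exits 0 when the unit is active and non-zero otherwise,
	// so a nil error from Run() means kubelet is running.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}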
	I0612 20:30:47.756104   32635 kubeadm.go:576] duration metric: took 19.443318841s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 20:30:47.756129   32635 node_conditions.go:102] verifying NodePressure condition ...
	I0612 20:30:47.928545   32635 request.go:629] Waited for 172.345307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes
	I0612 20:30:47.928631   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes
	I0612 20:30:47.928636   32635 round_trippers.go:469] Request Headers:
	I0612 20:30:47.928644   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:30:47.928649   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:30:47.932159   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:30:47.933103   32635 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 20:30:47.933136   32635 node_conditions.go:123] node cpu capacity is 2
	I0612 20:30:47.933159   32635 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 20:30:47.933165   32635 node_conditions.go:123] node cpu capacity is 2
	I0612 20:30:47.933171   32635 node_conditions.go:105] duration metric: took 177.036683ms to run NodePressure ...
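The NodePressure step lists every node and reads its reported capacity; both nodes in this run show 17734596Ki of ephemeral storage and 2 CPUs. A minimal sketch of pulling those figures with client-go, assuming a hypothetical kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}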
	I0612 20:30:47.933188   32635 start.go:240] waiting for startup goroutines ...
	I0612 20:30:47.933223   32635 start.go:254] writing updated cluster config ...
	I0612 20:30:47.935417   32635 out.go:177] 
	I0612 20:30:47.937248   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:30:47.937377   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:30:47.939120   32635 out.go:177] * Starting "ha-844626-m03" control-plane node in "ha-844626" cluster
	I0612 20:30:47.940397   32635 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:30:47.940418   32635 cache.go:56] Caching tarball of preloaded images
	I0612 20:30:47.940501   32635 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 20:30:47.940512   32635 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 20:30:47.940588   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:30:47.940905   32635 start.go:360] acquireMachinesLock for ha-844626-m03: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 20:30:47.940945   32635 start.go:364] duration metric: took 22.098µs to acquireMachinesLock for "ha-844626-m03"
	I0612 20:30:47.940964   32635 start.go:93] Provisioning new machine with config: &{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:30:47.941051   32635 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0612 20:30:47.943673   32635 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 20:30:47.943766   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:30:47.943798   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:30:47.959389   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35665
	I0612 20:30:47.959846   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:30:47.960359   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:30:47.960386   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:30:47.960716   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:30:47.960906   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetMachineName
	I0612 20:30:47.961019   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:30:47.961207   32635 start.go:159] libmachine.API.Create for "ha-844626" (driver="kvm2")
	I0612 20:30:47.961235   32635 client.go:168] LocalClient.Create starting
	I0612 20:30:47.961285   32635 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem
	I0612 20:30:47.961327   32635 main.go:141] libmachine: Decoding PEM data...
	I0612 20:30:47.961345   32635 main.go:141] libmachine: Parsing certificate...
	I0612 20:30:47.961413   32635 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem
	I0612 20:30:47.961441   32635 main.go:141] libmachine: Decoding PEM data...
	I0612 20:30:47.961449   32635 main.go:141] libmachine: Parsing certificate...
	I0612 20:30:47.961465   32635 main.go:141] libmachine: Running pre-create checks...
	I0612 20:30:47.961473   32635 main.go:141] libmachine: (ha-844626-m03) Calling .PreCreateCheck
	I0612 20:30:47.961648   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetConfigRaw
	I0612 20:30:47.962039   32635 main.go:141] libmachine: Creating machine...
	I0612 20:30:47.962053   32635 main.go:141] libmachine: (ha-844626-m03) Calling .Create
	I0612 20:30:47.962192   32635 main.go:141] libmachine: (ha-844626-m03) Creating KVM machine...
	I0612 20:30:47.963639   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found existing default KVM network
	I0612 20:30:47.963788   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found existing private KVM network mk-ha-844626
	I0612 20:30:47.963942   32635 main.go:141] libmachine: (ha-844626-m03) Setting up store path in /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03 ...
	I0612 20:30:47.963969   32635 main.go:141] libmachine: (ha-844626-m03) Building disk image from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0612 20:30:47.964005   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:47.963910   33685 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:30:47.964117   32635 main.go:141] libmachine: (ha-844626-m03) Downloading /home/jenkins/minikube-integration/17779-14199/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0612 20:30:48.183671   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:48.183542   33685 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa...
	I0612 20:30:48.278689   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:48.278547   33685 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/ha-844626-m03.rawdisk...
	I0612 20:30:48.278719   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Writing magic tar header
	I0612 20:30:48.278729   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Writing SSH key tar header
	I0612 20:30:48.278737   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:48.278674   33685 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03 ...
	I0612 20:30:48.278843   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03
	I0612 20:30:48.278861   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines
	I0612 20:30:48.278875   32635 main.go:141] libmachine: (ha-844626-m03) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03 (perms=drwx------)
	I0612 20:30:48.278884   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:30:48.278893   32635 main.go:141] libmachine: (ha-844626-m03) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines (perms=drwxr-xr-x)
	I0612 20:30:48.278907   32635 main.go:141] libmachine: (ha-844626-m03) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube (perms=drwxr-xr-x)
	I0612 20:30:48.278913   32635 main.go:141] libmachine: (ha-844626-m03) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199 (perms=drwxrwxr-x)
	I0612 20:30:48.278923   32635 main.go:141] libmachine: (ha-844626-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0612 20:30:48.278929   32635 main.go:141] libmachine: (ha-844626-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0612 20:30:48.278943   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199
	I0612 20:30:48.278952   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0612 20:30:48.278960   32635 main.go:141] libmachine: (ha-844626-m03) Creating domain...
	I0612 20:30:48.279067   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Checking permissions on dir: /home/jenkins
	I0612 20:30:48.279096   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Checking permissions on dir: /home
	I0612 20:30:48.279130   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Skipping /home - not owner
	I0612 20:30:48.280171   32635 main.go:141] libmachine: (ha-844626-m03) define libvirt domain using xml: 
	I0612 20:30:48.280192   32635 main.go:141] libmachine: (ha-844626-m03) <domain type='kvm'>
	I0612 20:30:48.280202   32635 main.go:141] libmachine: (ha-844626-m03)   <name>ha-844626-m03</name>
	I0612 20:30:48.280211   32635 main.go:141] libmachine: (ha-844626-m03)   <memory unit='MiB'>2200</memory>
	I0612 20:30:48.280218   32635 main.go:141] libmachine: (ha-844626-m03)   <vcpu>2</vcpu>
	I0612 20:30:48.280230   32635 main.go:141] libmachine: (ha-844626-m03)   <features>
	I0612 20:30:48.280261   32635 main.go:141] libmachine: (ha-844626-m03)     <acpi/>
	I0612 20:30:48.280282   32635 main.go:141] libmachine: (ha-844626-m03)     <apic/>
	I0612 20:30:48.280293   32635 main.go:141] libmachine: (ha-844626-m03)     <pae/>
	I0612 20:30:48.280304   32635 main.go:141] libmachine: (ha-844626-m03)     
	I0612 20:30:48.280333   32635 main.go:141] libmachine: (ha-844626-m03)   </features>
	I0612 20:30:48.280356   32635 main.go:141] libmachine: (ha-844626-m03)   <cpu mode='host-passthrough'>
	I0612 20:30:48.280363   32635 main.go:141] libmachine: (ha-844626-m03)   
	I0612 20:30:48.280372   32635 main.go:141] libmachine: (ha-844626-m03)   </cpu>
	I0612 20:30:48.280380   32635 main.go:141] libmachine: (ha-844626-m03)   <os>
	I0612 20:30:48.280386   32635 main.go:141] libmachine: (ha-844626-m03)     <type>hvm</type>
	I0612 20:30:48.280395   32635 main.go:141] libmachine: (ha-844626-m03)     <boot dev='cdrom'/>
	I0612 20:30:48.280406   32635 main.go:141] libmachine: (ha-844626-m03)     <boot dev='hd'/>
	I0612 20:30:48.280415   32635 main.go:141] libmachine: (ha-844626-m03)     <bootmenu enable='no'/>
	I0612 20:30:48.280425   32635 main.go:141] libmachine: (ha-844626-m03)   </os>
	I0612 20:30:48.280433   32635 main.go:141] libmachine: (ha-844626-m03)   <devices>
	I0612 20:30:48.280443   32635 main.go:141] libmachine: (ha-844626-m03)     <disk type='file' device='cdrom'>
	I0612 20:30:48.280455   32635 main.go:141] libmachine: (ha-844626-m03)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/boot2docker.iso'/>
	I0612 20:30:48.280465   32635 main.go:141] libmachine: (ha-844626-m03)       <target dev='hdc' bus='scsi'/>
	I0612 20:30:48.280474   32635 main.go:141] libmachine: (ha-844626-m03)       <readonly/>
	I0612 20:30:48.280484   32635 main.go:141] libmachine: (ha-844626-m03)     </disk>
	I0612 20:30:48.280494   32635 main.go:141] libmachine: (ha-844626-m03)     <disk type='file' device='disk'>
	I0612 20:30:48.280504   32635 main.go:141] libmachine: (ha-844626-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0612 20:30:48.280514   32635 main.go:141] libmachine: (ha-844626-m03)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/ha-844626-m03.rawdisk'/>
	I0612 20:30:48.280522   32635 main.go:141] libmachine: (ha-844626-m03)       <target dev='hda' bus='virtio'/>
	I0612 20:30:48.280527   32635 main.go:141] libmachine: (ha-844626-m03)     </disk>
	I0612 20:30:48.280534   32635 main.go:141] libmachine: (ha-844626-m03)     <interface type='network'>
	I0612 20:30:48.280540   32635 main.go:141] libmachine: (ha-844626-m03)       <source network='mk-ha-844626'/>
	I0612 20:30:48.280546   32635 main.go:141] libmachine: (ha-844626-m03)       <model type='virtio'/>
	I0612 20:30:48.280552   32635 main.go:141] libmachine: (ha-844626-m03)     </interface>
	I0612 20:30:48.280563   32635 main.go:141] libmachine: (ha-844626-m03)     <interface type='network'>
	I0612 20:30:48.280576   32635 main.go:141] libmachine: (ha-844626-m03)       <source network='default'/>
	I0612 20:30:48.280588   32635 main.go:141] libmachine: (ha-844626-m03)       <model type='virtio'/>
	I0612 20:30:48.280607   32635 main.go:141] libmachine: (ha-844626-m03)     </interface>
	I0612 20:30:48.280626   32635 main.go:141] libmachine: (ha-844626-m03)     <serial type='pty'>
	I0612 20:30:48.280636   32635 main.go:141] libmachine: (ha-844626-m03)       <target port='0'/>
	I0612 20:30:48.280646   32635 main.go:141] libmachine: (ha-844626-m03)     </serial>
	I0612 20:30:48.280657   32635 main.go:141] libmachine: (ha-844626-m03)     <console type='pty'>
	I0612 20:30:48.280668   32635 main.go:141] libmachine: (ha-844626-m03)       <target type='serial' port='0'/>
	I0612 20:30:48.280676   32635 main.go:141] libmachine: (ha-844626-m03)     </console>
	I0612 20:30:48.280686   32635 main.go:141] libmachine: (ha-844626-m03)     <rng model='virtio'>
	I0612 20:30:48.280701   32635 main.go:141] libmachine: (ha-844626-m03)       <backend model='random'>/dev/random</backend>
	I0612 20:30:48.280715   32635 main.go:141] libmachine: (ha-844626-m03)     </rng>
	I0612 20:30:48.280726   32635 main.go:141] libmachine: (ha-844626-m03)     
	I0612 20:30:48.280737   32635 main.go:141] libmachine: (ha-844626-m03)     
	I0612 20:30:48.280745   32635 main.go:141] libmachine: (ha-844626-m03)   </devices>
	I0612 20:30:48.280755   32635 main.go:141] libmachine: (ha-844626-m03) </domain>
	I0612 20:30:48.280765   32635 main.go:141] libmachine: (ha-844626-m03) 
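	(Editor's illustration, not part of the test log: the XML printed above is the complete libvirt domain the kvm2 driver defines for the m03 node - 2 vCPUs, 2200 MiB RAM, the boot2docker ISO as a CD-ROM, the raw disk, and two virtio NICs on the mk-ha-844626 and default networks. As a minimal sketch only - it assumes virsh is installed and the XML has been saved to ha-844626-m03.xml, and it is not how minikube itself does this (the driver talks to libvirt's API directly) - the same define-then-start step could be driven from Go like so:)

	package main

	// Illustrative sketch: mirrors the define-then-start sequence seen in the
	// log by shelling out to virsh instead of using the libvirt API.
	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := [][]string{
			{"virsh", "define", "ha-844626-m03.xml"}, // register the domain with libvirt
			{"virsh", "start", "ha-844626-m03"},      // boot it; DHCP later hands out the IP the log waits for
		}
		for _, c := range cmds {
			out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
			if err != nil {
				fmt.Printf("%v failed: %v\n%s", c, err, out)
				return
			}
		}
	}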
	I0612 20:30:48.287742   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:9b:b8:26 in network default
	I0612 20:30:48.288414   32635 main.go:141] libmachine: (ha-844626-m03) Ensuring networks are active...
	I0612 20:30:48.288449   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:48.289226   32635 main.go:141] libmachine: (ha-844626-m03) Ensuring network default is active
	I0612 20:30:48.289688   32635 main.go:141] libmachine: (ha-844626-m03) Ensuring network mk-ha-844626 is active
	I0612 20:30:48.290056   32635 main.go:141] libmachine: (ha-844626-m03) Getting domain xml...
	I0612 20:30:48.290712   32635 main.go:141] libmachine: (ha-844626-m03) Creating domain...
	I0612 20:30:49.530435   32635 main.go:141] libmachine: (ha-844626-m03) Waiting to get IP...
	I0612 20:30:49.531208   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:49.531694   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:49.531731   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:49.531676   33685 retry.go:31] will retry after 288.871984ms: waiting for machine to come up
	I0612 20:30:49.822409   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:49.822897   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:49.822926   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:49.822858   33685 retry.go:31] will retry after 248.487043ms: waiting for machine to come up
	I0612 20:30:50.073378   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:50.074000   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:50.074032   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:50.073942   33685 retry.go:31] will retry after 462.366809ms: waiting for machine to come up
	I0612 20:30:50.537464   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:50.537883   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:50.537920   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:50.537831   33685 retry.go:31] will retry after 483.777516ms: waiting for machine to come up
	I0612 20:30:51.023503   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:51.023968   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:51.023998   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:51.023922   33685 retry.go:31] will retry after 745.471957ms: waiting for machine to come up
	I0612 20:30:51.770915   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:51.771388   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:51.771418   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:51.771330   33685 retry.go:31] will retry after 847.558263ms: waiting for machine to come up
	I0612 20:30:52.620418   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:52.620789   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:52.620818   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:52.620736   33685 retry.go:31] will retry after 856.076838ms: waiting for machine to come up
	I0612 20:30:53.478317   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:53.478753   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:53.478782   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:53.478715   33685 retry.go:31] will retry after 1.102009532s: waiting for machine to come up
	I0612 20:30:54.582139   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:54.582598   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:54.582631   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:54.582547   33685 retry.go:31] will retry after 1.62493678s: waiting for machine to come up
	I0612 20:30:56.209482   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:56.209972   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:56.210002   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:56.209923   33685 retry.go:31] will retry after 2.048125966s: waiting for machine to come up
	I0612 20:30:58.259821   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:30:58.260459   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:30:58.260495   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:30:58.260396   33685 retry.go:31] will retry after 2.165398236s: waiting for machine to come up
	I0612 20:31:00.428290   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:00.428804   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:31:00.428829   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:31:00.428752   33685 retry.go:31] will retry after 3.00838211s: waiting for machine to come up
	I0612 20:31:03.439244   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:03.439728   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find current IP address of domain ha-844626-m03 in network mk-ha-844626
	I0612 20:31:03.439749   32635 main.go:141] libmachine: (ha-844626-m03) DBG | I0612 20:31:03.439679   33685 retry.go:31] will retry after 4.481196758s: waiting for machine to come up
	I0612 20:31:07.925066   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:07.925573   32635 main.go:141] libmachine: (ha-844626-m03) Found IP for machine: 192.168.39.76
	I0612 20:31:07.925610   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has current primary IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:07.925619   32635 main.go:141] libmachine: (ha-844626-m03) Reserving static IP address...
	I0612 20:31:07.926018   32635 main.go:141] libmachine: (ha-844626-m03) DBG | unable to find host DHCP lease matching {name: "ha-844626-m03", mac: "52:54:00:81:de:69", ip: "192.168.39.76"} in network mk-ha-844626
	I0612 20:31:08.000537   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Getting to WaitForSSH function...
	I0612 20:31:08.000565   32635 main.go:141] libmachine: (ha-844626-m03) Reserved static IP address: 192.168.39.76
	I0612 20:31:08.000577   32635 main.go:141] libmachine: (ha-844626-m03) Waiting for SSH to be available...
	I0612 20:31:08.003095   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.003569   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:minikube Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.003602   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.003791   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Using SSH client type: external
	I0612 20:31:08.003815   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa (-rw-------)
	I0612 20:31:08.003843   32635 main.go:141] libmachine: (ha-844626-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 20:31:08.003863   32635 main.go:141] libmachine: (ha-844626-m03) DBG | About to run SSH command:
	I0612 20:31:08.003876   32635 main.go:141] libmachine: (ha-844626-m03) DBG | exit 0
	I0612 20:31:08.127361   32635 main.go:141] libmachine: (ha-844626-m03) DBG | SSH cmd err, output: <nil>: 
	I0612 20:31:08.127629   32635 main.go:141] libmachine: (ha-844626-m03) KVM machine creation complete!
	I0612 20:31:08.127956   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetConfigRaw
	I0612 20:31:08.128477   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:31:08.128632   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:31:08.128760   32635 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0612 20:31:08.128771   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetState
	I0612 20:31:08.129987   32635 main.go:141] libmachine: Detecting operating system of created instance...
	I0612 20:31:08.130000   32635 main.go:141] libmachine: Waiting for SSH to be available...
	I0612 20:31:08.130006   32635 main.go:141] libmachine: Getting to WaitForSSH function...
	I0612 20:31:08.130016   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:08.132310   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.132657   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.132689   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.132766   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:08.132971   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.133168   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.133307   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:08.133497   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:31:08.133692   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0612 20:31:08.133706   32635 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0612 20:31:08.234624   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 20:31:08.234650   32635 main.go:141] libmachine: Detecting the provisioner...
	I0612 20:31:08.234662   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:08.238508   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.238950   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.238980   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.239113   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:08.239307   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.239435   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.239596   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:08.239718   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:31:08.239899   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0612 20:31:08.239913   32635 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0612 20:31:08.344252   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0612 20:31:08.344327   32635 main.go:141] libmachine: found compatible host: buildroot
	I0612 20:31:08.344336   32635 main.go:141] libmachine: Provisioning with buildroot...
	I0612 20:31:08.344353   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetMachineName
	I0612 20:31:08.344580   32635 buildroot.go:166] provisioning hostname "ha-844626-m03"
	I0612 20:31:08.344594   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetMachineName
	I0612 20:31:08.344758   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:08.347365   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.347673   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.347700   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.347855   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:08.348041   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.348198   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.348322   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:08.348469   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:31:08.348621   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0612 20:31:08.348632   32635 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844626-m03 && echo "ha-844626-m03" | sudo tee /etc/hostname
	I0612 20:31:08.465878   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844626-m03
	
	I0612 20:31:08.465909   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:08.468578   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.468989   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.469019   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.469206   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:08.469432   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.469619   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.469762   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:08.469917   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:31:08.470071   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0612 20:31:08.470086   32635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844626-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844626-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844626-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 20:31:08.580790   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 20:31:08.580817   32635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 20:31:08.580833   32635 buildroot.go:174] setting up certificates
	I0612 20:31:08.580842   32635 provision.go:84] configureAuth start
	I0612 20:31:08.580850   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetMachineName
	I0612 20:31:08.581161   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:31:08.584514   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.584914   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.584939   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.585132   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:08.587586   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.587900   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.587928   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.588070   32635 provision.go:143] copyHostCerts
	I0612 20:31:08.588113   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:31:08.588155   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 20:31:08.588168   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:31:08.588241   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 20:31:08.588319   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:31:08.588339   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 20:31:08.588346   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:31:08.588371   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 20:31:08.588429   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:31:08.588446   32635 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 20:31:08.588452   32635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:31:08.588472   32635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 20:31:08.588516   32635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.ha-844626-m03 san=[127.0.0.1 192.168.39.76 ha-844626-m03 localhost minikube]
	I0612 20:31:08.985254   32635 provision.go:177] copyRemoteCerts
	I0612 20:31:08.985309   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 20:31:08.985330   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:08.987927   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.988302   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:08.988325   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:08.988518   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:08.988720   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:08.988898   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:08.989051   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:31:09.071188   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0612 20:31:09.071278   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 20:31:09.096872   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0612 20:31:09.096928   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0612 20:31:09.121719   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0612 20:31:09.121792   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 20:31:09.147728   32635 provision.go:87] duration metric: took 566.87254ms to configureAuth
	I0612 20:31:09.147762   32635 buildroot.go:189] setting minikube options for container-runtime
	I0612 20:31:09.147995   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:31:09.148098   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:09.150549   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.150883   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.150913   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.151009   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:09.151220   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:09.151383   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:09.151514   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:09.151669   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:31:09.151819   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0612 20:31:09.151833   32635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 20:31:09.429751   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 20:31:09.429783   32635 main.go:141] libmachine: Checking connection to Docker...
	I0612 20:31:09.429796   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetURL
	I0612 20:31:09.431160   32635 main.go:141] libmachine: (ha-844626-m03) DBG | Using libvirt version 6000000
	I0612 20:31:09.433450   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.433884   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.433915   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.434123   32635 main.go:141] libmachine: Docker is up and running!
	I0612 20:31:09.434135   32635 main.go:141] libmachine: Reticulating splines...
	I0612 20:31:09.434141   32635 client.go:171] duration metric: took 21.472896203s to LocalClient.Create
	I0612 20:31:09.434161   32635 start.go:167] duration metric: took 21.472955338s to libmachine.API.Create "ha-844626"
	I0612 20:31:09.434171   32635 start.go:293] postStartSetup for "ha-844626-m03" (driver="kvm2")
	I0612 20:31:09.434180   32635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 20:31:09.434195   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:31:09.434433   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 20:31:09.434483   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:09.436351   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.436710   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.436740   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.436809   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:09.436953   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:09.437111   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:09.437271   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:31:09.518260   32635 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 20:31:09.522742   32635 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 20:31:09.522764   32635 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 20:31:09.522825   32635 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 20:31:09.522891   32635 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 20:31:09.522900   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /etc/ssl/certs/214442.pem
	I0612 20:31:09.522972   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 20:31:09.532751   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:31:09.561336   32635 start.go:296] duration metric: took 127.151212ms for postStartSetup
	I0612 20:31:09.561393   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetConfigRaw
	I0612 20:31:09.561980   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:31:09.564747   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.565107   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.565143   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.565359   32635 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:31:09.565541   32635 start.go:128] duration metric: took 21.624480809s to createHost
	I0612 20:31:09.565563   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:09.567821   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.568161   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.568189   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.568426   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:09.568623   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:09.568808   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:09.568997   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:09.569213   32635 main.go:141] libmachine: Using SSH client type: native
	I0612 20:31:09.569360   32635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0612 20:31:09.569370   32635 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 20:31:09.672998   32635 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718224269.643780635
	
	I0612 20:31:09.673041   32635 fix.go:216] guest clock: 1718224269.643780635
	I0612 20:31:09.673051   32635 fix.go:229] Guest: 2024-06-12 20:31:09.643780635 +0000 UTC Remote: 2024-06-12 20:31:09.565552821 +0000 UTC m=+208.626001239 (delta=78.227814ms)
	I0612 20:31:09.673074   32635 fix.go:200] guest clock delta is within tolerance: 78.227814ms
	I0612 20:31:09.673085   32635 start.go:83] releasing machines lock for "ha-844626-m03", held for 21.732129511s
	I0612 20:31:09.673109   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:31:09.673368   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:31:09.675736   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.676137   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.676163   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.678666   32635 out.go:177] * Found network options:
	I0612 20:31:09.680298   32635 out.go:177]   - NO_PROXY=192.168.39.196,192.168.39.108
	W0612 20:31:09.681788   32635 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 20:31:09.681811   32635 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 20:31:09.681823   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:31:09.682457   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:31:09.682640   32635 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:31:09.682709   32635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 20:31:09.682751   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	W0612 20:31:09.683033   32635 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 20:31:09.683056   32635 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 20:31:09.683135   32635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 20:31:09.683155   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:31:09.685451   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.685788   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.685813   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.685887   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.685998   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:09.686219   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:09.686385   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:09.686449   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:09.686476   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:09.686567   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:31:09.686572   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:31:09.686689   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:31:09.686853   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:31:09.686993   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:31:09.920572   32635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 20:31:09.927596   32635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 20:31:09.927673   32635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 20:31:09.944808   32635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 20:31:09.944832   32635 start.go:494] detecting cgroup driver to use...
	I0612 20:31:09.944897   32635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 20:31:09.962865   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 20:31:09.979533   32635 docker.go:217] disabling cri-docker service (if available) ...
	I0612 20:31:09.979586   32635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 20:31:09.994509   32635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 20:31:10.010483   32635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 20:31:10.133393   32635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 20:31:10.309888   32635 docker.go:233] disabling docker service ...
	I0612 20:31:10.309964   32635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 20:31:10.327760   32635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 20:31:10.342124   32635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 20:31:10.472337   32635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 20:31:10.599790   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 20:31:10.615120   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 20:31:10.635337   32635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 20:31:10.635413   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:31:10.646919   32635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 20:31:10.646994   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:31:10.658588   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:31:10.670406   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:31:10.681737   32635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 20:31:10.694481   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:31:10.706838   32635 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:31:10.725071   32635 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
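Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl) should leave /etc/crio/crio.conf.d/02-crio.conf with lines roughly like the following; this is a reconstruction from the commands in the log, not a capture from the VM:

pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]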
	I0612 20:31:10.736339   32635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 20:31:10.746185   32635 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 20:31:10.746232   32635 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 20:31:10.759865   32635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 20:31:10.769901   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:31:10.891233   32635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 20:31:11.056415   32635 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 20:31:11.056500   32635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 20:31:11.061865   32635 start.go:562] Will wait 60s for crictl version
	I0612 20:31:11.061925   32635 ssh_runner.go:195] Run: which crictl
	I0612 20:31:11.065846   32635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 20:31:11.109896   32635 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 20:31:11.109972   32635 ssh_runner.go:195] Run: crio --version
	I0612 20:31:11.139063   32635 ssh_runner.go:195] Run: crio --version
	I0612 20:31:11.170476   32635 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 20:31:11.171902   32635 out.go:177]   - env NO_PROXY=192.168.39.196
	I0612 20:31:11.173186   32635 out.go:177]   - env NO_PROXY=192.168.39.196,192.168.39.108
	I0612 20:31:11.174409   32635 main.go:141] libmachine: (ha-844626-m03) Calling .GetIP
	I0612 20:31:11.177335   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:11.177685   32635 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:31:11.177714   32635 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:31:11.177934   32635 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 20:31:11.182119   32635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 20:31:11.195361   32635 mustload.go:65] Loading cluster: ha-844626
	I0612 20:31:11.195625   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:31:11.195944   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:31:11.195985   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:31:11.211009   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42727
	I0612 20:31:11.211462   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:31:11.211950   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:31:11.211983   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:31:11.212314   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:31:11.212509   32635 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:31:11.213918   32635 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:31:11.214189   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:31:11.214221   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:31:11.229954   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41555
	I0612 20:31:11.230381   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:31:11.230898   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:31:11.230923   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:31:11.231263   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:31:11.231484   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:31:11.231654   32635 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626 for IP: 192.168.39.76
	I0612 20:31:11.231667   32635 certs.go:194] generating shared ca certs ...
	I0612 20:31:11.231689   32635 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:31:11.231860   32635 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 20:31:11.231917   32635 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 20:31:11.231931   32635 certs.go:256] generating profile certs ...
	I0612 20:31:11.232022   32635 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key
	I0612 20:31:11.232051   32635 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.a557c0af
	I0612 20:31:11.232079   32635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.a557c0af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.108 192.168.39.76 192.168.39.254]
	I0612 20:31:11.614498   32635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.a557c0af ...
	I0612 20:31:11.614528   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.a557c0af: {Name:mkb1a6c2268debdda293d42197a6a0500f29d2e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:31:11.614689   32635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.a557c0af ...
	I0612 20:31:11.614700   32635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.a557c0af: {Name:mka8804460e33713c2d81479b819d02daff8d551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:31:11.614764   32635 certs.go:381] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.a557c0af -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt
	I0612 20:31:11.614888   32635 certs.go:385] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.a557c0af -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key
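The apiserver profile cert above is issued with IP SANs covering the in-cluster service IP (10.96.0.1), loopback, all three control-plane node IPs, and the kube-vip VIP (192.168.39.254). A minimal, self-contained Go sketch of producing a certificate with that SAN list (self-signed for brevity; this is not minikube's certs.go):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Same IP SAN list as logged for apiserver.crt.a557c0af above.
	sans := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.196"), net.ParseIP("192.168.39.108"),
		net.ParseIP("192.168.39.76"), net.ParseIP("192.168.39.254"),
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  sans,
	}
	// Self-signed here; minikube signs with its shared minikubeCA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}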
	I0612 20:31:11.614999   32635 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key
	I0612 20:31:11.615014   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 20:31:11.615027   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0612 20:31:11.615042   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 20:31:11.615052   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 20:31:11.615061   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 20:31:11.615069   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 20:31:11.615080   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 20:31:11.615088   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 20:31:11.615130   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 20:31:11.615157   32635 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 20:31:11.615164   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 20:31:11.615208   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 20:31:11.615239   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 20:31:11.615261   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 20:31:11.615328   32635 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:31:11.615367   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:31:11.615382   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem -> /usr/share/ca-certificates/21444.pem
	I0612 20:31:11.615396   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /usr/share/ca-certificates/214442.pem
	I0612 20:31:11.615427   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:31:11.618513   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:31:11.618908   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:31:11.618934   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:31:11.619093   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:31:11.619309   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:31:11.619466   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:31:11.619607   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:31:11.695564   32635 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0612 20:31:11.701753   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0612 20:31:11.714706   32635 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0612 20:31:11.719619   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0612 20:31:11.731960   32635 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0612 20:31:11.736481   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0612 20:31:11.746492   32635 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0612 20:31:11.751454   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0612 20:31:11.763416   32635 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0612 20:31:11.768272   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0612 20:31:11.779490   32635 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0612 20:31:11.783822   32635 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0612 20:31:11.795612   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 20:31:11.822551   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 20:31:11.848718   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 20:31:11.873366   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 20:31:11.897994   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0612 20:31:11.923909   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 20:31:11.950870   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 20:31:11.977087   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 20:31:12.002515   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 20:31:12.029115   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 20:31:12.056793   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 20:31:12.083280   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0612 20:31:12.101473   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0612 20:31:12.120030   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0612 20:31:12.138498   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0612 20:31:12.156187   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0612 20:31:12.176290   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0612 20:31:12.194850   32635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0612 20:31:12.212575   32635 ssh_runner.go:195] Run: openssl version
	I0612 20:31:12.218789   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 20:31:12.229576   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 20:31:12.234167   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 20:31:12.234218   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 20:31:12.241617   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 20:31:12.253094   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 20:31:12.264132   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 20:31:12.268860   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 20:31:12.268928   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 20:31:12.275059   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 20:31:12.286432   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 20:31:12.298994   32635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:31:12.303731   32635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:31:12.303775   32635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:31:12.310345   32635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 20:31:12.324673   32635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 20:31:12.329313   32635 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 20:31:12.329386   32635 kubeadm.go:928] updating node {m03 192.168.39.76 8443 v1.30.1 crio true true} ...
	I0612 20:31:12.329470   32635 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844626-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 20:31:12.329507   32635 kube-vip.go:115] generating kube-vip config ...
	I0612 20:31:12.329551   32635 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0612 20:31:12.350550   32635 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0612 20:31:12.350611   32635 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0612 20:31:12.350666   32635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 20:31:12.364956   32635 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0612 20:31:12.365009   32635 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0612 20:31:12.378412   32635 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0612 20:31:12.378441   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 20:31:12.378444   32635 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0612 20:31:12.378444   32635 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0612 20:31:12.378468   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 20:31:12.378503   32635 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 20:31:12.378506   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:31:12.378530   32635 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 20:31:12.397449   32635 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0612 20:31:12.397461   32635 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0612 20:31:12.397496   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0612 20:31:12.397503   32635 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 20:31:12.397518   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0612 20:31:12.397578   32635 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 20:31:12.419400   32635 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0612 20:31:12.419445   32635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
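The kubectl/kubeadm/kubelet downloads above are requested with a ?checksum=file:....sha256 suffix, i.e. each binary is verified against the SHA-256 published on dl.k8s.io before being cached and copied to the node. A rough Go sketch of that verification, assuming the .sha256 file contains just the hex digest:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verifySHA256 compares a local file against an expected hex digest, e.g. the
// contents of https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256.
func verifySHA256(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(wantHex) {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", path, got, wantHex)
	}
	return nil
}

func main() {
	// Usage (illustrative): verify <path> <expected-sha256-hex>
	if err := verifySHA256(os.Args[1], os.Args[2]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}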
	I0612 20:31:13.323865   32635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0612 20:31:13.336087   32635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0612 20:31:13.354818   32635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 20:31:13.372711   32635 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0612 20:31:13.390571   32635 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0612 20:31:13.394598   32635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 20:31:13.407397   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:31:13.523697   32635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 20:31:13.541507   32635 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:31:13.541969   32635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:31:13.542025   32635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:31:13.558836   32635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0612 20:31:13.559295   32635 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:31:13.559976   32635 main.go:141] libmachine: Using API Version  1
	I0612 20:31:13.560014   32635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:31:13.560375   32635 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:31:13.560593   32635 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:31:13.560770   32635 start.go:316] joinCluster: &{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:31:13.560889   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0612 20:31:13.560909   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:31:13.564005   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:31:13.564508   32635 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:31:13.564538   32635 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:31:13.564700   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:31:13.564887   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:31:13.565045   32635 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:31:13.565169   32635 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:31:13.721863   32635 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:31:13.721910   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nxb23r.suyi54h7mrjhpsua --discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844626-m03 --control-plane --apiserver-advertise-address=192.168.39.76 --apiserver-bind-port=8443"
	I0612 20:31:37.421067   32635 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nxb23r.suyi54h7mrjhpsua --discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-844626-m03 --control-plane --apiserver-advertise-address=192.168.39.76 --apiserver-bind-port=8443": (23.699126175s)
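The --discovery-token-ca-cert-hash passed to kubeadm join above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, as printed by kubeadm token create --print-join-command. A small Go sketch that recomputes it from a PEM-encoded ca.crt (the path is illustrative):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// e.g. /var/lib/minikube/certs/ca.crt on a minikube control-plane node.
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}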
	I0612 20:31:37.421105   32635 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0612 20:31:37.987126   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-844626-m03 minikube.k8s.io/updated_at=2024_06_12T20_31_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=ha-844626 minikube.k8s.io/primary=false
	I0612 20:31:38.124611   32635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-844626-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0612 20:31:38.230403   32635 start.go:318] duration metric: took 24.669630386s to joinCluster
	I0612 20:31:38.230494   32635 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 20:31:38.231832   32635 out.go:177] * Verifying Kubernetes components...
	I0612 20:31:38.230765   32635 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:31:38.233162   32635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:31:38.490906   32635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 20:31:38.526483   32635 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:31:38.526721   32635 kapi.go:59] client config for ha-844626: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.crt", KeyFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key", CAFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfb000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0612 20:31:38.526802   32635 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.196:8443
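The kapi client dumped above is a plain client-go rest.Config built from the profile's client cert/key and CA, first pointed at the VIP and then overridden to the first control-plane endpoint. A stripped-down sketch of constructing an equivalent client (field values taken from the dump; this is not minikube's kapi package):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		// Overridden from the stale VIP host https://192.168.39.254:8443.
		Host: "https://192.168.39.196:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key",
			CAFile:   "/home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-844626-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(node.Name, node.Status.NodeInfo.KubeletVersion)
}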
	I0612 20:31:38.527031   32635 node_ready.go:35] waiting up to 6m0s for node "ha-844626-m03" to be "Ready" ...
	I0612 20:31:38.527106   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:38.527116   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:38.527128   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:38.527145   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:38.530680   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:39.027416   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:39.027443   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:39.027454   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:39.027459   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:39.031781   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:31:39.528068   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:39.528094   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:39.528107   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:39.528111   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:39.531692   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:40.028125   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:40.028154   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:40.028161   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:40.028165   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:40.066785   32635 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0612 20:31:40.527322   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:40.527343   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:40.527351   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:40.527356   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:40.531343   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:40.531992   32635 node_ready.go:53] node "ha-844626-m03" has status "Ready":"False"
	I0612 20:31:41.027772   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:41.027801   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:41.027810   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:41.027815   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:41.031058   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:41.527326   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:41.527345   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:41.527353   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:41.527358   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:41.531197   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:42.028267   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:42.028294   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:42.028306   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:42.028311   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:42.032108   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:42.528068   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:42.528111   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:42.528126   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:42.528132   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:42.532649   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:31:42.534388   32635 node_ready.go:53] node "ha-844626-m03" has status "Ready":"False"
	I0612 20:31:43.028177   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:43.028202   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:43.028212   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:43.028220   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:43.031990   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:43.527286   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:43.527308   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:43.527316   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:43.527320   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:43.531214   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:44.027723   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:44.027811   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:44.027836   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:44.027851   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:44.031620   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:44.528016   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:44.528049   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:44.528061   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:44.528069   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:44.532435   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:31:45.027915   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:45.027938   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:45.027946   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:45.027950   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:45.032028   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:31:45.032714   32635 node_ready.go:53] node "ha-844626-m03" has status "Ready":"False"
	I0612 20:31:45.528113   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:45.528134   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:45.528142   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:45.528145   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:45.531604   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:46.027769   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:46.027795   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.027806   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.027812   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.031128   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:46.527669   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:46.527697   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.527709   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.527715   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.531779   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:31:46.532389   32635 node_ready.go:49] node "ha-844626-m03" has status "Ready":"True"
	I0612 20:31:46.532414   32635 node_ready.go:38] duration metric: took 8.005364342s for node "ha-844626-m03" to be "Ready" ...
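The node_ready wait above simply re-fetches the Node object about every 500ms until its NodeReady condition reports True. A hedged client-go sketch of that check (clientset construction as in the previous sketch; this is not minikube's node_ready.go):

package nodeready

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isNodeReady reports whether the named node's NodeReady condition is True.
func isNodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

// WaitNodeReady polls until the node is Ready or the timeout elapses.
func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := isNodeReady(ctx, cs, name)
		if err == nil && ready {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}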
	I0612 20:31:46.532428   32635 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 20:31:46.532495   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:31:46.532509   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.532519   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.532525   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.540139   32635 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 20:31:46.547810   32635 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bqzvn" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.547885   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bqzvn
	I0612 20:31:46.547893   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.547900   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.547905   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.550761   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.551415   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:46.551429   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.551435   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.551439   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.554217   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.554817   32635 pod_ready.go:92] pod "coredns-7db6d8ff4d-bqzvn" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:46.554840   32635 pod_ready.go:81] duration metric: took 7.00561ms for pod "coredns-7db6d8ff4d-bqzvn" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.554851   32635 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lxd6n" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.554913   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lxd6n
	I0612 20:31:46.554923   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.554933   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.554938   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.557496   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.558334   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:46.558348   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.558355   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.558359   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.560530   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.561126   32635 pod_ready.go:92] pod "coredns-7db6d8ff4d-lxd6n" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:46.561142   32635 pod_ready.go:81] duration metric: took 6.284183ms for pod "coredns-7db6d8ff4d-lxd6n" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.561149   32635 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.561200   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626
	I0612 20:31:46.561208   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.561215   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.561218   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.563744   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.564320   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:46.564332   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.564338   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.564342   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.566807   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.567330   32635 pod_ready.go:92] pod "etcd-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:46.567345   32635 pod_ready.go:81] duration metric: took 6.19023ms for pod "etcd-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.567352   32635 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.567402   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m02
	I0612 20:31:46.567412   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.567423   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.567431   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.569759   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.570287   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:46.570302   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.570311   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.570316   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.572958   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:46.573397   32635 pod_ready.go:92] pod "etcd-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:46.573411   32635 pod_ready.go:81] duration metric: took 6.053668ms for pod "etcd-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.573419   32635 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:46.727724   32635 request.go:629] Waited for 154.232817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:46.727789   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:46.727796   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.727806   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.727818   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.731086   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:46.928232   32635 request.go:629] Waited for 196.34772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:46.928290   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:46.928295   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:46.928304   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:46.928308   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:46.933132   32635 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 20:31:47.128254   32635 request.go:629] Waited for 54.231002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:47.128320   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:47.128327   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:47.128339   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:47.128348   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:47.131582   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:47.328210   32635 request.go:629] Waited for 195.396597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:47.328302   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:47.328313   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:47.328323   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:47.328329   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:47.331707   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:47.574333   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:47.574356   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:47.574363   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:47.574367   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:47.577739   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:47.727817   32635 request.go:629] Waited for 149.225733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:47.727883   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:47.727890   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:47.727900   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:47.727906   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:47.731659   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:48.073962   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:48.073983   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:48.073990   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:48.073994   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:48.077082   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:48.128024   32635 request.go:629] Waited for 50.252672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:48.128192   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:48.128213   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:48.128225   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:48.128235   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:48.131715   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:48.574112   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:48.574133   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:48.574141   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:48.574145   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:48.577985   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:48.578520   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:48.578534   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:48.578541   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:48.578545   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:48.581487   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:48.581879   32635 pod_ready.go:102] pod "etcd-ha-844626-m03" in "kube-system" namespace has status "Ready":"False"
	I0612 20:31:49.074361   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:49.074385   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:49.074393   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:49.074398   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:49.077684   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:49.078478   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:49.078491   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:49.078498   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:49.078502   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:49.081408   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:49.573751   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:49.573774   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:49.573781   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:49.573786   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:49.577153   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:49.577967   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:49.577981   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:49.577988   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:49.577993   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:49.580712   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:50.074042   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:50.074064   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:50.074072   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:50.074076   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:50.077484   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:50.078347   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:50.078373   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:50.078381   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:50.078385   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:50.081252   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:50.574456   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:50.574478   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:50.574487   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:50.574490   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:50.578229   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:50.579085   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:50.579101   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:50.579108   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:50.579112   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:50.581689   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:50.582205   32635 pod_ready.go:102] pod "etcd-ha-844626-m03" in "kube-system" namespace has status "Ready":"False"
	I0612 20:31:51.073632   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:51.073656   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:51.073664   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:51.073668   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:51.077129   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:51.077690   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:51.077707   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:51.077717   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:51.077722   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:51.080226   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:51.574347   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:51.574367   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:51.574375   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:51.574380   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:51.578352   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:51.578938   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:51.578954   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:51.578963   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:51.578967   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:51.582800   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.073848   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-844626-m03
	I0612 20:31:52.073876   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.073887   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.073891   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.077582   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.078469   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:52.078490   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.078501   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.078505   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.081525   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.082718   32635 pod_ready.go:92] pod "etcd-ha-844626-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:52.082740   32635 pod_ready.go:81] duration metric: took 5.509311762s for pod "etcd-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.082763   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.082830   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844626
	I0612 20:31:52.082841   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.082851   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.082862   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.085626   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:52.086373   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:52.086389   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.086396   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.086399   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.089053   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:52.089541   32635 pod_ready.go:92] pod "kube-apiserver-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:52.089556   32635 pod_ready.go:81] duration metric: took 6.782641ms for pod "kube-apiserver-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.089564   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.089611   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844626-m02
	I0612 20:31:52.089618   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.089625   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.089631   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.093324   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.128316   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:52.128342   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.128354   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.128362   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.132258   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.132723   32635 pod_ready.go:92] pod "kube-apiserver-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:52.132740   32635 pod_ready.go:81] duration metric: took 43.169177ms for pod "kube-apiserver-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.132748   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.327995   32635 request.go:629] Waited for 195.172189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844626-m03
	I0612 20:31:52.328054   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-844626-m03
	I0612 20:31:52.328060   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.328069   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.328079   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.330878   32635 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 20:31:52.527844   32635 request.go:629] Waited for 196.286481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:52.527915   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:52.527924   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.527934   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.527941   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.530973   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.531519   32635 pod_ready.go:92] pod "kube-apiserver-ha-844626-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:52.531545   32635 pod_ready.go:81] duration metric: took 398.790061ms for pod "kube-apiserver-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.531558   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.728581   32635 request.go:629] Waited for 196.949195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626
	I0612 20:31:52.728634   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626
	I0612 20:31:52.728639   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.728646   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.728649   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.731995   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.928138   32635 request.go:629] Waited for 195.346578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:52.928211   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:52.928216   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:52.928224   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:52.928229   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:52.931855   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:52.932808   32635 pod_ready.go:92] pod "kube-controller-manager-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:52.932827   32635 pod_ready.go:81] duration metric: took 401.260741ms for pod "kube-controller-manager-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:52.932835   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:53.128301   32635 request.go:629] Waited for 195.41004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:31:53.128390   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m02
	I0612 20:31:53.128398   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:53.128407   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:53.128412   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:53.132328   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:53.328286   32635 request.go:629] Waited for 195.374363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:53.328341   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:53.328348   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:53.328355   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:53.328361   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:53.332028   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:53.332717   32635 pod_ready.go:92] pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:53.332738   32635 pod_ready.go:81] duration metric: took 399.896251ms for pod "kube-controller-manager-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:53.332747   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:53.527674   32635 request.go:629] Waited for 194.858927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m03
	I0612 20:31:53.527754   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-844626-m03
	I0612 20:31:53.527764   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:53.527770   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:53.527776   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:53.531048   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:53.728358   32635 request.go:629] Waited for 196.372417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:53.728412   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:53.728417   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:53.728425   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:53.728430   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:53.732421   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:53.734734   32635 pod_ready.go:92] pod "kube-controller-manager-ha-844626-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:53.734760   32635 pod_ready.go:81] duration metric: took 402.005437ms for pod "kube-controller-manager-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:53.734777   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2clg8" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:53.927774   32635 request.go:629] Waited for 192.919944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2clg8
	I0612 20:31:53.927831   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2clg8
	I0612 20:31:53.927836   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:53.927843   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:53.927849   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:53.931507   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:54.128530   32635 request.go:629] Waited for 196.27683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:54.128599   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:54.128606   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:54.128616   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:54.128622   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:54.134543   32635 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 20:31:54.135601   32635 pod_ready.go:92] pod "kube-proxy-2clg8" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:54.135630   32635 pod_ready.go:81] duration metric: took 400.844763ms for pod "kube-proxy-2clg8" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:54.135644   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-69ctp" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:54.328620   32635 request.go:629] Waited for 192.902619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-69ctp
	I0612 20:31:54.328686   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-69ctp
	I0612 20:31:54.328693   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:54.328701   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:54.328705   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:54.332062   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:54.528054   32635 request.go:629] Waited for 195.328764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:54.528119   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:54.528126   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:54.528133   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:54.528141   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:54.531837   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:54.532356   32635 pod_ready.go:92] pod "kube-proxy-69ctp" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:54.532375   32635 pod_ready.go:81] duration metric: took 396.724238ms for pod "kube-proxy-69ctp" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:54.532384   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f7ct8" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:54.728378   32635 request.go:629] Waited for 195.936765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f7ct8
	I0612 20:31:54.728450   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f7ct8
	I0612 20:31:54.728458   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:54.728465   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:54.728472   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:54.731856   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:54.927763   32635 request.go:629] Waited for 195.286396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:54.927865   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:54.927880   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:54.927889   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:54.927899   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:54.931389   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:54.931956   32635 pod_ready.go:92] pod "kube-proxy-f7ct8" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:54.931976   32635 pod_ready.go:81] duration metric: took 399.586497ms for pod "kube-proxy-f7ct8" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:54.931985   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:55.128039   32635 request.go:629] Waited for 195.996524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626
	I0612 20:31:55.128099   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626
	I0612 20:31:55.128105   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:55.128122   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:55.128129   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:55.131689   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:55.328341   32635 request.go:629] Waited for 195.800766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:55.328431   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626
	I0612 20:31:55.328443   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:55.328453   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:55.328460   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:55.332328   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:55.333051   32635 pod_ready.go:92] pod "kube-scheduler-ha-844626" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:55.333070   32635 pod_ready.go:81] duration metric: took 401.077538ms for pod "kube-scheduler-ha-844626" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:55.333082   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:55.528131   32635 request.go:629] Waited for 194.985749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626-m02
	I0612 20:31:55.528203   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626-m02
	I0612 20:31:55.528208   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:55.528215   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:55.528219   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:55.532123   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:55.728034   32635 request.go:629] Waited for 195.369687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:55.728095   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m02
	I0612 20:31:55.728102   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:55.728115   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:55.728126   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:55.731299   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:55.731949   32635 pod_ready.go:92] pod "kube-scheduler-ha-844626-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:55.731969   32635 pod_ready.go:81] duration metric: took 398.877951ms for pod "kube-scheduler-ha-844626-m02" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:55.731978   32635 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:55.928038   32635 request.go:629] Waited for 195.972809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626-m03
	I0612 20:31:55.928129   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-844626-m03
	I0612 20:31:55.928141   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:55.928153   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:55.928164   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:55.931701   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:56.128003   32635 request.go:629] Waited for 195.339584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:56.128092   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-844626-m03
	I0612 20:31:56.128106   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:56.128116   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:56.128125   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:56.131663   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:56.132326   32635 pod_ready.go:92] pod "kube-scheduler-ha-844626-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 20:31:56.132350   32635 pod_ready.go:81] duration metric: took 400.363545ms for pod "kube-scheduler-ha-844626-m03" in "kube-system" namespace to be "Ready" ...
	I0612 20:31:56.132365   32635 pod_ready.go:38] duration metric: took 9.599925264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 20:31:56.132387   32635 api_server.go:52] waiting for apiserver process to appear ...
	I0612 20:31:56.132450   32635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:31:56.148716   32635 api_server.go:72] duration metric: took 17.918187765s to wait for apiserver process to appear ...
	I0612 20:31:56.148747   32635 api_server.go:88] waiting for apiserver healthz status ...
	I0612 20:31:56.148767   32635 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0612 20:31:56.155111   32635 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0612 20:31:56.155198   32635 round_trippers.go:463] GET https://192.168.39.196:8443/version
	I0612 20:31:56.155208   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:56.155216   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:56.155219   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:56.155969   32635 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 20:31:56.156023   32635 api_server.go:141] control plane version: v1.30.1
	I0612 20:31:56.156036   32635 api_server.go:131] duration metric: took 7.282834ms to wait for apiserver health ...
	I0612 20:31:56.156044   32635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 20:31:56.328332   32635 request.go:629] Waited for 172.226629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:31:56.328397   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:31:56.328402   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:56.328411   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:56.328422   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:56.334778   32635 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 20:31:56.341135   32635 system_pods.go:59] 24 kube-system pods found
	I0612 20:31:56.341160   32635 system_pods.go:61] "coredns-7db6d8ff4d-bqzvn" [b22b3ba0-1a59-4066-9db5-380986d73dca] Running
	I0612 20:31:56.341164   32635 system_pods.go:61] "coredns-7db6d8ff4d-lxd6n" [65d25d78-6fa7-4dc7-9cf2-e2fac796f194] Running
	I0612 20:31:56.341168   32635 system_pods.go:61] "etcd-ha-844626" [73812d48-addc-4957-ae24-6bbad2f5fbaa] Running
	I0612 20:31:56.341171   32635 system_pods.go:61] "etcd-ha-844626-m02" [57d89f35-94d4-4b64-a648-c440eaddef2a] Running
	I0612 20:31:56.341174   32635 system_pods.go:61] "etcd-ha-844626-m03" [663349bf-770f-4ea2-acf1-9fef6dd30299] Running
	I0612 20:31:56.341177   32635 system_pods.go:61] "kindnet-8hdxz" [26fbb25f-70b2-41bc-809a-0f8ba75a8432] Running
	I0612 20:31:56.341180   32635 system_pods.go:61] "kindnet-fz6bl" [fb946e9f-19cd-4a9f-8585-88118c840922] Running
	I0612 20:31:56.341183   32635 system_pods.go:61] "kindnet-mthnq" [49950bb0-368d-4239-ae93-04c980a8b531] Running
	I0612 20:31:56.341186   32635 system_pods.go:61] "kube-apiserver-ha-844626" [0e8ba551-e997-453a-b76f-a090a441bce4] Running
	I0612 20:31:56.341189   32635 system_pods.go:61] "kube-apiserver-ha-844626-m02" [eeaf9c1b-e433-4de6-b6e8-4c33cd467a42] Running
	I0612 20:31:56.341192   32635 system_pods.go:61] "kube-apiserver-ha-844626-m03" [5f530a0a-cc60-4724-b3fa-4525884da5e8] Running
	I0612 20:31:56.341195   32635 system_pods.go:61] "kube-controller-manager-ha-844626" [9bca7a0a-74d1-4b9c-9915-2cf6a4eb5e52] Running
	I0612 20:31:56.341198   32635 system_pods.go:61] "kube-controller-manager-ha-844626-m02" [6e26986e-06e4-4e85-b83d-57c2254732f0] Running
	I0612 20:31:56.341201   32635 system_pods.go:61] "kube-controller-manager-ha-844626-m03" [0df52c5e-a186-4b14-a5d4-bb6d5190bac0] Running
	I0612 20:31:56.341204   32635 system_pods.go:61] "kube-proxy-2clg8" [9e4dd97c-794a-4f29-bc12-f7892e5fcfd4] Running
	I0612 20:31:56.341208   32635 system_pods.go:61] "kube-proxy-69ctp" [c66149e8-2a69-4f1f-9ddc-5e272204e6f5] Running
	I0612 20:31:56.341210   32635 system_pods.go:61] "kube-proxy-f7ct8" [4bf3e7e1-68e8-4d0d-980b-cb5055e10365] Running
	I0612 20:31:56.341213   32635 system_pods.go:61] "kube-scheduler-ha-844626" [49238394-1429-40ce-8d74-290b1743547f] Running
	I0612 20:31:56.341216   32635 system_pods.go:61] "kube-scheduler-ha-844626-m02" [488c0960-8abb-40d1-a92e-bd4f61b5973b] Running
	I0612 20:31:56.341219   32635 system_pods.go:61] "kube-scheduler-ha-844626-m03" [2ec2f277-0a72-4937-8591-28ca2822e98d] Running
	I0612 20:31:56.341222   32635 system_pods.go:61] "kube-vip-ha-844626" [654fd183-21b0-4df5-b557-ed676c5ecb71] Running
	I0612 20:31:56.341227   32635 system_pods.go:61] "kube-vip-ha-844626-m02" [c7785d9d-bfc0-4f65-b853-36a7f2ba791b] Running
	I0612 20:31:56.341234   32635 system_pods.go:61] "kube-vip-ha-844626-m03" [4207cddd-6eb3-40c6-be2c-ac895964aa0d] Running
	I0612 20:31:56.341239   32635 system_pods.go:61] "storage-provisioner" [d94c16d7-da82-41e3-82fe-83ed6e581f69] Running
	I0612 20:31:56.341247   32635 system_pods.go:74] duration metric: took 185.195643ms to wait for pod list to return data ...
	I0612 20:31:56.341260   32635 default_sa.go:34] waiting for default service account to be created ...
	I0612 20:31:56.528660   32635 request.go:629] Waited for 187.33133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0612 20:31:56.528711   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0612 20:31:56.528717   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:56.528725   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:56.528732   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:56.532203   32635 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 20:31:56.532352   32635 default_sa.go:45] found service account: "default"
	I0612 20:31:56.532374   32635 default_sa.go:55] duration metric: took 191.105869ms for default service account to be created ...
	I0612 20:31:56.532384   32635 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 20:31:56.727726   32635 request.go:629] Waited for 195.277738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:31:56.727804   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0612 20:31:56.727816   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:56.727826   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:56.727836   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:56.737456   32635 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 20:31:56.744345   32635 system_pods.go:86] 24 kube-system pods found
	I0612 20:31:56.744371   32635 system_pods.go:89] "coredns-7db6d8ff4d-bqzvn" [b22b3ba0-1a59-4066-9db5-380986d73dca] Running
	I0612 20:31:56.744378   32635 system_pods.go:89] "coredns-7db6d8ff4d-lxd6n" [65d25d78-6fa7-4dc7-9cf2-e2fac796f194] Running
	I0612 20:31:56.744382   32635 system_pods.go:89] "etcd-ha-844626" [73812d48-addc-4957-ae24-6bbad2f5fbaa] Running
	I0612 20:31:56.744388   32635 system_pods.go:89] "etcd-ha-844626-m02" [57d89f35-94d4-4b64-a648-c440eaddef2a] Running
	I0612 20:31:56.744395   32635 system_pods.go:89] "etcd-ha-844626-m03" [663349bf-770f-4ea2-acf1-9fef6dd30299] Running
	I0612 20:31:56.744401   32635 system_pods.go:89] "kindnet-8hdxz" [26fbb25f-70b2-41bc-809a-0f8ba75a8432] Running
	I0612 20:31:56.744411   32635 system_pods.go:89] "kindnet-fz6bl" [fb946e9f-19cd-4a9f-8585-88118c840922] Running
	I0612 20:31:56.744421   32635 system_pods.go:89] "kindnet-mthnq" [49950bb0-368d-4239-ae93-04c980a8b531] Running
	I0612 20:31:56.744427   32635 system_pods.go:89] "kube-apiserver-ha-844626" [0e8ba551-e997-453a-b76f-a090a441bce4] Running
	I0612 20:31:56.744436   32635 system_pods.go:89] "kube-apiserver-ha-844626-m02" [eeaf9c1b-e433-4de6-b6e8-4c33cd467a42] Running
	I0612 20:31:56.744445   32635 system_pods.go:89] "kube-apiserver-ha-844626-m03" [5f530a0a-cc60-4724-b3fa-4525884da5e8] Running
	I0612 20:31:56.744450   32635 system_pods.go:89] "kube-controller-manager-ha-844626" [9bca7a0a-74d1-4b9c-9915-2cf6a4eb5e52] Running
	I0612 20:31:56.744456   32635 system_pods.go:89] "kube-controller-manager-ha-844626-m02" [6e26986e-06e4-4e85-b83d-57c2254732f0] Running
	I0612 20:31:56.744461   32635 system_pods.go:89] "kube-controller-manager-ha-844626-m03" [0df52c5e-a186-4b14-a5d4-bb6d5190bac0] Running
	I0612 20:31:56.744468   32635 system_pods.go:89] "kube-proxy-2clg8" [9e4dd97c-794a-4f29-bc12-f7892e5fcfd4] Running
	I0612 20:31:56.744472   32635 system_pods.go:89] "kube-proxy-69ctp" [c66149e8-2a69-4f1f-9ddc-5e272204e6f5] Running
	I0612 20:31:56.744478   32635 system_pods.go:89] "kube-proxy-f7ct8" [4bf3e7e1-68e8-4d0d-980b-cb5055e10365] Running
	I0612 20:31:56.744482   32635 system_pods.go:89] "kube-scheduler-ha-844626" [49238394-1429-40ce-8d74-290b1743547f] Running
	I0612 20:31:56.744489   32635 system_pods.go:89] "kube-scheduler-ha-844626-m02" [488c0960-8abb-40d1-a92e-bd4f61b5973b] Running
	I0612 20:31:56.744493   32635 system_pods.go:89] "kube-scheduler-ha-844626-m03" [2ec2f277-0a72-4937-8591-28ca2822e98d] Running
	I0612 20:31:56.744499   32635 system_pods.go:89] "kube-vip-ha-844626" [654fd183-21b0-4df5-b557-ed676c5ecb71] Running
	I0612 20:31:56.744504   32635 system_pods.go:89] "kube-vip-ha-844626-m02" [c7785d9d-bfc0-4f65-b853-36a7f2ba791b] Running
	I0612 20:31:56.744510   32635 system_pods.go:89] "kube-vip-ha-844626-m03" [4207cddd-6eb3-40c6-be2c-ac895964aa0d] Running
	I0612 20:31:56.744519   32635 system_pods.go:89] "storage-provisioner" [d94c16d7-da82-41e3-82fe-83ed6e581f69] Running
	I0612 20:31:56.744529   32635 system_pods.go:126] duration metric: took 212.137812ms to wait for k8s-apps to be running ...
	I0612 20:31:56.744541   32635 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 20:31:56.744588   32635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:31:56.760798   32635 system_svc.go:56] duration metric: took 16.250874ms WaitForService to wait for kubelet
	I0612 20:31:56.760825   32635 kubeadm.go:576] duration metric: took 18.530299856s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 20:31:56.760848   32635 node_conditions.go:102] verifying NodePressure condition ...
	I0612 20:31:56.928296   32635 request.go:629] Waited for 167.369083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes
	I0612 20:31:56.928397   32635 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes
	I0612 20:31:56.928408   32635 round_trippers.go:469] Request Headers:
	I0612 20:31:56.928423   32635 round_trippers.go:473]     Accept: application/json, */*
	I0612 20:31:56.928432   32635 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0612 20:31:56.935911   32635 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 20:31:56.937610   32635 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 20:31:56.937637   32635 node_conditions.go:123] node cpu capacity is 2
	I0612 20:31:56.937653   32635 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 20:31:56.937660   32635 node_conditions.go:123] node cpu capacity is 2
	I0612 20:31:56.937665   32635 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 20:31:56.937670   32635 node_conditions.go:123] node cpu capacity is 2
	I0612 20:31:56.937676   32635 node_conditions.go:105] duration metric: took 176.822246ms to run NodePressure ...
	I0612 20:31:56.937692   32635 start.go:240] waiting for startup goroutines ...
	I0612 20:31:56.937720   32635 start.go:254] writing updated cluster config ...
	I0612 20:31:56.938125   32635 ssh_runner.go:195] Run: rm -f paused
	I0612 20:31:56.991753   32635 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 20:31:56.993807   32635 out.go:177] * Done! kubectl is now configured to use "ha-844626" cluster and "default" namespace by default
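	
	Note: the run above exercises minikube's HA readiness flow end to end: it polls each control-plane pod for a Ready condition (the pod_ready.go lines), pgreps the kube-apiserver process, probes /healthz, lists kube-system pods, and finally checks the kubelet unit via systemctl. The repeated "Waited for ... due to client-side throttling, not priority and fairness" entries come from client-go's client-side rate limiter (rest.Config QPS/Burst), not from server-side API Priority and Fairness. Below is a minimal, standalone sketch of the same Ready-condition poll with client-go; the kubeconfig path, pod name, QPS/Burst values, and timeout are illustrative assumptions, not minikube's actual settings.

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // isPodReady reports whether the pod's Ready condition is True,
	    // mirroring the pod_ready checks in the log above.
	    func isPodReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        // Illustrative kubeconfig path; point it at the profile under test.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	        if err != nil {
	            panic(err)
	        }
	        // Raising QPS/Burst here would reduce the client-side throttling waits seen above.
	        cfg.QPS = 50
	        cfg.Burst = 100
	        client := kubernetes.NewForConfigOrDie(cfg)

	        // Poll one control-plane pod until Ready or until the deadline expires.
	        deadline := time.Now().Add(6 * time.Minute)
	        for time.Now().Before(deadline) {
	            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-844626-m03", metav1.GetOptions{})
	            if err == nil && isPodReady(pod) {
	                fmt.Println("pod is Ready")
	                return
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        fmt.Println("timed out waiting for pod to be Ready")
	    }

	The sketch deliberately mirrors the request pattern in the log (GET the pod, then re-check on an interval) rather than using a watch, since that is what the throttled GET bursts above correspond to.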
	
	
	==> CRI-O <==
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.784438352Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d09a2c8-7c12-462c-a474-c2cd008ea12e name=/runtime.v1.RuntimeService/Version
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.785596165Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51dc88ce-6d0c-4eb1-b81f-ecbc9e446dbd name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.786123058Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718224590786095781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51dc88ce-6d0c-4eb1-b81f-ecbc9e446dbd name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.786866416Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7be1b25c-9af7-495d-97b2-894abee25444 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.786949086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7be1b25c-9af7-495d-97b2-894abee25444 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.787277564Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccf4b3ead47f7dfc1b7faf2419e80a004cb2158ced9fe68be13277115f3c6569,PodSandboxId:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224321149787168,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0,PodSandboxId:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119278046658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff,PodSandboxId:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119207347720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a8f38c6abf70e91806516f6efb3aec847188dad6c91439ca9660d95029a3e6,PodSandboxId:f9dadbeb4bc2e8a16844613b21df3ec41cfde1ec2af14a253acf83cca3a30c77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718224119120797950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c30a5477508feea3fbb6cfdecd135d22a50b2e156bd4473175e26702f5c416d0,PodSandboxId:129f4ebc50a11b61c1dd83775ccaebc4b91dbea2042983198fd5117bfc252683,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718224117627449734,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c,PodSandboxId:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171822411
3786698401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd52024c12a2b486d52b8f6803360b3172fb54227b17758bbd09a2e22dc32163,PodSandboxId:b103684a1a841cc799e6cf1a92d9d837be2f300bbf7cc35bdb47f898a491a851,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17182240970
53063306,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3a13e0b5fc3f27bb690c5d127326271,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d,PodSandboxId:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224093469563326,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92,PodSandboxId:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224093415472008,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41bc9389144d30c98a68d86d2f724492e05278d6c650700937bb9e9dca93881a,PodSandboxId:52f253395536d18114f5cc470daa0964b165f0d0ea899e8c3c61cd8cc9006f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224093393756393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac304305cc393d3678df3414155a5e9ca1fb5abecbd1ecb70c20c1c4f562bbf,PodSandboxId:4e98354eb40b14c0b715e4b40bf90e912f8896ef232ef8071df238b51fcc9a90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224093340732616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7be1b25c-9af7-495d-97b2-894abee25444 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.799997658Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=994378c4-b3b2-4508-a9a1-6e733c3868a6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.800329652Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-bdzsx,Uid:74f96190-8d97-478c-b01d-de61520289be,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718224318399469478,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T20:31:58.083744941Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-lxd6n,Uid:65d25d78-6fa7-4dc7-9cf2-e2fac796f194,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1718224119052154242,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T20:28:38.737567040Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqzvn,Uid:b22b3ba0-1a59-4066-9db5-380986d73dca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718224118952157086,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-06-12T20:28:38.632855885Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f9dadbeb4bc2e8a16844613b21df3ec41cfde1ec2af14a253acf83cca3a30c77,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d94c16d7-da82-41e3-82fe-83ed6e581f69,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718224118936461280,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-12T20:28:38.623962154Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&PodSandboxMetadata{Name:kube-proxy-69ctp,Uid:c66149e8-2a69-4f1f-9ddc-5e272204e6f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718224113556458465,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-06-12T20:28:33.229353463Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:129f4ebc50a11b61c1dd83775ccaebc4b91dbea2042983198fd5117bfc252683,Metadata:&PodSandboxMetadata{Name:kindnet-mthnq,Uid:49950bb0-368d-4239-ae93-04c980a8b531,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718224113543607742,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T20:28:33.221392803Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&PodSandboxMetadata{Name:etcd-ha-844626,Uid:5eeb7c1880efee41beff2f38986d6a2f,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1718224093168164649,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.196:2379,kubernetes.io/config.hash: 5eeb7c1880efee41beff2f38986d6a2f,kubernetes.io/config.seen: 2024-06-12T20:28:12.683671153Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4e98354eb40b14c0b715e4b40bf90e912f8896ef232ef8071df238b51fcc9a90,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-844626,Uid:48a4dcb0404b2818e4d9a3c344a7e5d6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718224093165649595,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 48a4dcb0404b2818e4d9a3c344a7e5d6,kubernetes.io/config.seen: 2024-06-12T20:28:12.683678554Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:52f253395536d18114f5cc470daa0964b165f0d0ea899e8c3c61cd8cc9006f96,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-844626,Uid:5d96acdf137cf3b5a36cb1641ff47f87,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718224093164051577,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.196:8443,kubernetes.io/config.hash: 5d96acdf137cf3b5a36cb1641ff47f87,kubernetes.io/config.seen: 2024-06-12T20:28:12.683672711Z,kube
rnetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b103684a1a841cc799e6cf1a92d9d837be2f300bbf7cc35bdb47f898a491a851,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-844626,Uid:e3a13e0b5fc3f27bb690c5d127326271,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718224093152742201,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3a13e0b5fc3f27bb690c5d127326271,},Annotations:map[string]string{kubernetes.io/config.hash: e3a13e0b5fc3f27bb690c5d127326271,kubernetes.io/config.seen: 2024-06-12T20:28:12.683660962Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-844626,Uid:f6a445b2a0c4cdfeb60569362c5f7933,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718224093144983457,Labels:map[string]string{component: kube-scheduler,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f6a445b2a0c4cdfeb60569362c5f7933,kubernetes.io/config.seen: 2024-06-12T20:28:12.683679762Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=994378c4-b3b2-4508-a9a1-6e733c3868a6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.801093378Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea375e3f-6c0e-4c15-8c49-7aed6ca29127 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.801159707Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea375e3f-6c0e-4c15-8c49-7aed6ca29127 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.801678507Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccf4b3ead47f7dfc1b7faf2419e80a004cb2158ced9fe68be13277115f3c6569,PodSandboxId:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224321149787168,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0,PodSandboxId:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119278046658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff,PodSandboxId:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119207347720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a8f38c6abf70e91806516f6efb3aec847188dad6c91439ca9660d95029a3e6,PodSandboxId:f9dadbeb4bc2e8a16844613b21df3ec41cfde1ec2af14a253acf83cca3a30c77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718224119120797950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c30a5477508feea3fbb6cfdecd135d22a50b2e156bd4473175e26702f5c416d0,PodSandboxId:129f4ebc50a11b61c1dd83775ccaebc4b91dbea2042983198fd5117bfc252683,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718224117627449734,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c,PodSandboxId:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171822411
3786698401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd52024c12a2b486d52b8f6803360b3172fb54227b17758bbd09a2e22dc32163,PodSandboxId:b103684a1a841cc799e6cf1a92d9d837be2f300bbf7cc35bdb47f898a491a851,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17182240970
53063306,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3a13e0b5fc3f27bb690c5d127326271,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d,PodSandboxId:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224093469563326,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92,PodSandboxId:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224093415472008,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41bc9389144d30c98a68d86d2f724492e05278d6c650700937bb9e9dca93881a,PodSandboxId:52f253395536d18114f5cc470daa0964b165f0d0ea899e8c3c61cd8cc9006f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224093393756393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac304305cc393d3678df3414155a5e9ca1fb5abecbd1ecb70c20c1c4f562bbf,PodSandboxId:4e98354eb40b14c0b715e4b40bf90e912f8896ef232ef8071df238b51fcc9a90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224093340732616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea375e3f-6c0e-4c15-8c49-7aed6ca29127 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.831179839Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5de7a99d-3b4a-44da-9565-2685e6ea36da name=/runtime.v1.RuntimeService/Version
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.831401999Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5de7a99d-3b4a-44da-9565-2685e6ea36da name=/runtime.v1.RuntimeService/Version
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.832973583Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74af9e80-5e26-4d29-81bf-b0f7dd08c5eb name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.833536831Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718224590833511343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74af9e80-5e26-4d29-81bf-b0f7dd08c5eb name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.834251188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f457b63e-89f6-4787-a7ab-0950119efd26 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.834308297Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f457b63e-89f6-4787-a7ab-0950119efd26 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.834542953Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccf4b3ead47f7dfc1b7faf2419e80a004cb2158ced9fe68be13277115f3c6569,PodSandboxId:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224321149787168,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0,PodSandboxId:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119278046658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff,PodSandboxId:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119207347720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a8f38c6abf70e91806516f6efb3aec847188dad6c91439ca9660d95029a3e6,PodSandboxId:f9dadbeb4bc2e8a16844613b21df3ec41cfde1ec2af14a253acf83cca3a30c77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718224119120797950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c30a5477508feea3fbb6cfdecd135d22a50b2e156bd4473175e26702f5c416d0,PodSandboxId:129f4ebc50a11b61c1dd83775ccaebc4b91dbea2042983198fd5117bfc252683,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718224117627449734,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c,PodSandboxId:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171822411
3786698401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd52024c12a2b486d52b8f6803360b3172fb54227b17758bbd09a2e22dc32163,PodSandboxId:b103684a1a841cc799e6cf1a92d9d837be2f300bbf7cc35bdb47f898a491a851,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17182240970
53063306,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3a13e0b5fc3f27bb690c5d127326271,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d,PodSandboxId:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224093469563326,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92,PodSandboxId:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224093415472008,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41bc9389144d30c98a68d86d2f724492e05278d6c650700937bb9e9dca93881a,PodSandboxId:52f253395536d18114f5cc470daa0964b165f0d0ea899e8c3c61cd8cc9006f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224093393756393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac304305cc393d3678df3414155a5e9ca1fb5abecbd1ecb70c20c1c4f562bbf,PodSandboxId:4e98354eb40b14c0b715e4b40bf90e912f8896ef232ef8071df238b51fcc9a90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224093340732616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f457b63e-89f6-4787-a7ab-0950119efd26 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.876380458Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d40c8805-4ff2-4d13-aebd-e9cfb9b92152 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.877097149Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d40c8805-4ff2-4d13-aebd-e9cfb9b92152 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.878824895Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6855f84a-b553-45d7-9e8a-392dcc14bd86 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.879477078Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718224590879453513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6855f84a-b553-45d7-9e8a-392dcc14bd86 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.879902547Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8098f95d-e93f-4920-8c20-e26e1b3f4034 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.879972929Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8098f95d-e93f-4920-8c20-e26e1b3f4034 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:36:30 ha-844626 crio[683]: time="2024-06-12 20:36:30.880313182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccf4b3ead47f7dfc1b7faf2419e80a004cb2158ced9fe68be13277115f3c6569,PodSandboxId:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224321149787168,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0,PodSandboxId:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119278046658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff,PodSandboxId:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224119207347720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63a8f38c6abf70e91806516f6efb3aec847188dad6c91439ca9660d95029a3e6,PodSandboxId:f9dadbeb4bc2e8a16844613b21df3ec41cfde1ec2af14a253acf83cca3a30c77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1718224119120797950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c30a5477508feea3fbb6cfdecd135d22a50b2e156bd4473175e26702f5c416d0,PodSandboxId:129f4ebc50a11b61c1dd83775ccaebc4b91dbea2042983198fd5117bfc252683,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718224117627449734,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c,PodSandboxId:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171822411
3786698401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd52024c12a2b486d52b8f6803360b3172fb54227b17758bbd09a2e22dc32163,PodSandboxId:b103684a1a841cc799e6cf1a92d9d837be2f300bbf7cc35bdb47f898a491a851,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17182240970
53063306,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3a13e0b5fc3f27bb690c5d127326271,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d,PodSandboxId:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224093469563326,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92,PodSandboxId:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224093415472008,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41bc9389144d30c98a68d86d2f724492e05278d6c650700937bb9e9dca93881a,PodSandboxId:52f253395536d18114f5cc470daa0964b165f0d0ea899e8c3c61cd8cc9006f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224093393756393,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac304305cc393d3678df3414155a5e9ca1fb5abecbd1ecb70c20c1c4f562bbf,PodSandboxId:4e98354eb40b14c0b715e4b40bf90e912f8896ef232ef8071df238b51fcc9a90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224093340732616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8098f95d-e93f-4920-8c20-e26e1b3f4034 name=/runtime.v1.RuntimeService/ListContainers
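The debug entries above show crio answering /runtime.v1.RuntimeService ListPodSandbox and ListContainers calls over its unix socket. Below is a minimal Go sketch of issuing the same two RPCs, assuming the k8s.io/cri-api and google.golang.org/grpc modules and the unix:///var/run/crio/crio.sock endpoint referenced in the log; it illustrates the CRI calls being logged, not minikube's own implementation.

	// Sketch: issue the ListPodSandbox / ListContainers RPCs seen in the crio debug log.
	// Assumptions: cri-api v1 client, grpc-go with the "unix" scheme, crio's default socket path.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O listens on a local unix socket; no TLS is involved.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial crio socket: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Empty requests carry no filter, matching the
		// "No filters were applied, returning full container list" debug line above.
		pods, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
		if err != nil {
			log.Fatalf("ListPodSandbox: %v", err)
		}
		ctrs, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		fmt.Printf("%d sandboxes, %d containers\n", len(pods.Items), len(ctrs.Containers))
	}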
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ccf4b3ead47f7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   61e1e7d7b51fb       busybox-fc5497c4f-bdzsx
	5eb15a71cbeec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   43f0b5e0d015c       coredns-7db6d8ff4d-lxd6n
	6f896bc7211fd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   5dcd51ad312e1       coredns-7db6d8ff4d-bqzvn
	63a8f38c6abf7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   f9dadbeb4bc2e       storage-provisioner
	c30a5477508fe       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    7 minutes ago       Running             kindnet-cni               0                   129f4ebc50a11       kindnet-mthnq
	b028950fdf37b       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      7 minutes ago       Running             kube-proxy                0                   4e233e0bc3bb7       kube-proxy-69ctp
	cd52024c12a2b       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   b103684a1a841       kube-vip-ha-844626
	6255c7db8bcf2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   b0297d465b251       etcd-ha-844626
	223d45eb38f84       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      8 minutes ago       Running             kube-scheduler            0                   5512a35ec1cf1       kube-scheduler-ha-844626
	41bc9389144d3       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      8 minutes ago       Running             kube-apiserver            0                   52f253395536d       kube-apiserver-ha-844626
	1ac304305cc39       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      8 minutes ago       Running             kube-controller-manager   0                   4e98354eb40b1       kube-controller-manager-ha-844626
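The table above condenses the same ListContainersResponse fields: truncated container and pod-sandbox IDs, image, relative creation time, state, name, and restart attempt. A rough sketch of deriving such rows, building on the client sketch earlier and assuming the cri-api v1 field names (this is not the code that actually produced the table):

	// Hypothetical helper: condense a ListContainersResponse into rows resembling
	// the "container status" table above (IMAGE and CREATED columns omitted).
	func printStatusTable(resp *runtimeapi.ListContainersResponse) {
		for _, c := range resp.Containers {
			fmt.Printf("%-13s %-19s %-25s %-7d %-13s %s\n",
				c.Id[:13],                          // CONTAINER (truncated ID)
				c.State.String(),                   // STATE, e.g. CONTAINER_RUNNING
				c.Metadata.Name,                    // NAME
				c.Metadata.Attempt,                 // ATTEMPT
				c.PodSandboxId[:13],                // POD ID (truncated)
				c.Labels["io.kubernetes.pod.name"], // POD
			)
		}
	}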
	
	
	==> coredns [5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0] <==
	[INFO] 10.244.1.2:48442 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001875956s
	[INFO] 10.244.1.2:48528 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000293437s
	[INFO] 10.244.1.2:41648 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125681s
	[INFO] 10.244.1.2:54972 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113166s
	[INFO] 10.244.1.2:41309 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085981s
	[INFO] 10.244.2.2:46088 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001813687s
	[INFO] 10.244.2.2:41288 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099916s
	[INFO] 10.244.2.2:50111 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001353864s
	[INFO] 10.244.2.2:58718 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071988s
	[INFO] 10.244.2.2:53104 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063402s
	[INFO] 10.244.2.2:33504 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000200272s
	[INFO] 10.244.0.4:57974 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068404s
	[INFO] 10.244.1.2:36180 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000396478s
	[INFO] 10.244.1.2:44974 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143897s
	[INFO] 10.244.2.2:45916 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153283s
	[INFO] 10.244.2.2:54255 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107674s
	[INFO] 10.244.2.2:37490 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120001s
	[INFO] 10.244.2.2:35084 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008018s
	[INFO] 10.244.0.4:39477 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000273278s
	[INFO] 10.244.1.2:48205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158614s
	[INFO] 10.244.1.2:59881 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158202s
	[INFO] 10.244.1.2:35567 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000472197s
	[INFO] 10.244.1.2:56490 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000211826s
	[INFO] 10.244.2.2:48246 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156952s
	[INFO] 10.244.2.2:43466 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117313s
	
	
	==> coredns [6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff] <==
	[INFO] 10.244.0.4:35966 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.004427903s
	[INFO] 10.244.1.2:42207 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183564s
	[INFO] 10.244.2.2:40381 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000155821s
	[INFO] 10.244.2.2:38862 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000101136s
	[INFO] 10.244.2.2:44086 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001894727s
	[INFO] 10.244.0.4:56242 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009694s
	[INFO] 10.244.0.4:50224 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170892s
	[INFO] 10.244.0.4:50347 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139284s
	[INFO] 10.244.0.4:43967 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.022155051s
	[INFO] 10.244.0.4:34878 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000206851s
	[INFO] 10.244.1.2:46797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00034142s
	[INFO] 10.244.1.2:43369 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000248825s
	[INFO] 10.244.1.2:56650 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001632154s
	[INFO] 10.244.2.2:38141 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172487s
	[INFO] 10.244.2.2:60906 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158767s
	[INFO] 10.244.0.4:40480 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117274s
	[INFO] 10.244.0.4:47149 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000771s
	[INFO] 10.244.0.4:56834 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000323893s
	[INFO] 10.244.1.2:44664 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146272s
	[INFO] 10.244.1.2:47748 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110683s
	[INFO] 10.244.0.4:39510 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159779s
	[INFO] 10.244.0.4:49210 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125351s
	[INFO] 10.244.0.4:48326 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000179032s
	[INFO] 10.244.2.2:38296 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150584s
	[INFO] 10.244.2.2:58162 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116767s
	
	
	==> describe nodes <==
	Name:               ha-844626
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844626
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=ha-844626
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T20_28_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:28:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844626
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:36:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:32:24 +0000   Wed, 12 Jun 2024 20:28:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:32:24 +0000   Wed, 12 Jun 2024 20:28:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:32:24 +0000   Wed, 12 Jun 2024 20:28:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:32:24 +0000   Wed, 12 Jun 2024 20:28:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    ha-844626
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca8d79507bbc4f44bf947af92833058f
	  System UUID:                ca8d7950-7bbc-4f44-bf94-7af92833058f
	  Boot ID:                    da0f0a2a-5126-4bca-9f1f-744b30254ff4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bdzsx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 coredns-7db6d8ff4d-bqzvn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m58s
	  kube-system                 coredns-7db6d8ff4d-lxd6n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m58s
	  kube-system                 etcd-ha-844626                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m12s
	  kube-system                 kindnet-mthnq                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m58s
	  kube-system                 kube-apiserver-ha-844626             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-controller-manager-ha-844626    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-proxy-69ctp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-scheduler-ha-844626             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-vip-ha-844626                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m56s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m19s (x7 over 8m19s)  kubelet          Node ha-844626 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m19s (x8 over 8m19s)  kubelet          Node ha-844626 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m19s (x8 over 8m19s)  kubelet          Node ha-844626 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m12s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m12s                  kubelet          Node ha-844626 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m12s                  kubelet          Node ha-844626 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m12s                  kubelet          Node ha-844626 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m59s                  node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	  Normal  NodeReady                7m53s                  kubelet          Node ha-844626 status is now: NodeReady
	  Normal  RegisteredNode           5m49s                  node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	  Normal  RegisteredNode           4m39s                  node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	
	
	Name:               ha-844626-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844626-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=ha-844626
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T20_30_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:30:25 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844626-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:32:58 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 12 Jun 2024 20:32:27 +0000   Wed, 12 Jun 2024 20:33:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 12 Jun 2024 20:32:27 +0000   Wed, 12 Jun 2024 20:33:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 12 Jun 2024 20:32:27 +0000   Wed, 12 Jun 2024 20:33:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 12 Jun 2024 20:32:27 +0000   Wed, 12 Jun 2024 20:33:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.108
	  Hostname:    ha-844626-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc34ec9a17c449479c11e07f628f1a6e
	  System UUID:                fc34ec9a-17c4-4947-9c11-e07f628f1a6e
	  Boot ID:                    3b223b75-c640-40c2-9cb9-0319e4770144
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bh59q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 etcd-ha-844626-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m1s
	  kube-system                 kindnet-fz6bl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m6s
	  kube-system                 kube-apiserver-ha-844626-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-controller-manager-ha-844626-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-proxy-f7ct8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-scheduler-ha-844626-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-vip-ha-844626-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  6m6s (x8 over 6m6s)  kubelet          Node ha-844626-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s (x8 over 6m6s)  kubelet          Node ha-844626-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x7 over 6m6s)  kubelet          Node ha-844626-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m4s                 node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  RegisteredNode           5m49s                node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  RegisteredNode           4m39s                node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  NodeNotReady             2m49s                node-controller  Node ha-844626-m02 status is now: NodeNotReady
	
	
	Name:               ha-844626-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844626-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=ha-844626
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T20_31_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:31:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844626-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:36:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:32:05 +0000   Wed, 12 Jun 2024 20:31:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:32:05 +0000   Wed, 12 Jun 2024 20:31:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:32:05 +0000   Wed, 12 Jun 2024 20:31:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:32:05 +0000   Wed, 12 Jun 2024 20:31:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    ha-844626-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e6bf394d9ac40219e8a5de4a5d52b0f
	  System UUID:                1e6bf394-d9ac-4021-9e8a-5de4a5d52b0f
	  Boot ID:                    ef8801d4-4f53-4501-8d8f-1febd29ecc5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dhw8h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 etcd-ha-844626-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m55s
	  kube-system                 kindnet-8hdxz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m57s
	  kube-system                 kube-apiserver-ha-844626-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-controller-manager-ha-844626-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-proxy-2clg8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-ha-844626-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-vip-ha-844626-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m57s (x8 over 4m57s)  kubelet          Node ha-844626-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s (x8 over 4m57s)  kubelet          Node ha-844626-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s (x7 over 4m57s)  kubelet          Node ha-844626-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-844626-m03 event: Registered Node ha-844626-m03 in Controller
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-844626-m03 event: Registered Node ha-844626-m03 in Controller
	  Normal  RegisteredNode           4m39s                  node-controller  Node ha-844626-m03 event: Registered Node ha-844626-m03 in Controller
	
	
	Name:               ha-844626-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844626-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=ha-844626
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T20_32_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:32:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844626-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:36:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:33:05 +0000   Wed, 12 Jun 2024 20:32:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:33:05 +0000   Wed, 12 Jun 2024 20:32:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:33:05 +0000   Wed, 12 Jun 2024 20:32:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:33:05 +0000   Wed, 12 Jun 2024 20:32:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    ha-844626-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 76e9ad048f36466a8cb780349dbd0fce
	  System UUID:                76e9ad04-8f36-466a-8cb7-80349dbd0fce
	  Boot ID:                    9b195a09-7c2c-4edb-aee8-31e13eaba894
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pwr4p       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m56s
	  kube-system                 kube-proxy-dbk2r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m56s (x3 over 3m57s)  kubelet          Node ha-844626-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x3 over 3m57s)  kubelet          Node ha-844626-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x3 over 3m57s)  kubelet          Node ha-844626-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal  NodeReady                3m46s                  kubelet          Node ha-844626-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun12 20:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051526] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040422] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.527122] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.467419] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.568206] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun12 20:28] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.063983] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073055] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.159207] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.152158] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.286482] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.221083] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.069110] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.063782] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.293152] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.089558] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.977157] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.420198] kauditd_printk_skb: 38 callbacks suppressed
	[Jun12 20:30] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d] <==
	{"level":"warn","ts":"2024-06-12T20:36:31.194603Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.207495Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.211677Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.216118Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.22672Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.24544Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.253428Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.257894Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.262036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.27713Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.277639Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.279476Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.284842Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.293484Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.293665Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.296153Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.29989Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.301659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.305369Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.31182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.31898Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.320379Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.108:2380/version","remote-member-id":"d248ce75fc8bdbf7","error":"Get \"https://192.168.39.108:2380/version\": dial tcp 192.168.39.108:2380: i/o timeout"}
	{"level":"warn","ts":"2024-06-12T20:36:31.320433Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d248ce75fc8bdbf7","error":"Get \"https://192.168.39.108:2380/version\": dial tcp 192.168.39.108:2380: i/o timeout"}
	{"level":"warn","ts":"2024-06-12T20:36:31.326706Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:36:31.376519Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:36:31 up 8 min,  0 users,  load average: 0.75, 0.45, 0.22
	Linux ha-844626 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c30a5477508feea3fbb6cfdecd135d22a50b2e156bd4473175e26702f5c416d0] <==
	I0612 20:35:59.055141       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	I0612 20:36:09.067835       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0612 20:36:09.067879       1 main.go:227] handling current node
	I0612 20:36:09.067890       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0612 20:36:09.067895       1 main.go:250] Node ha-844626-m02 has CIDR [10.244.1.0/24] 
	I0612 20:36:09.068002       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0612 20:36:09.068008       1 main.go:250] Node ha-844626-m03 has CIDR [10.244.2.0/24] 
	I0612 20:36:09.068065       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0612 20:36:09.068070       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	I0612 20:36:19.083384       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0612 20:36:19.083433       1 main.go:227] handling current node
	I0612 20:36:19.083445       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0612 20:36:19.083449       1 main.go:250] Node ha-844626-m02 has CIDR [10.244.1.0/24] 
	I0612 20:36:19.083600       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0612 20:36:19.083649       1 main.go:250] Node ha-844626-m03 has CIDR [10.244.2.0/24] 
	I0612 20:36:19.083737       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0612 20:36:19.083763       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	I0612 20:36:29.098461       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0612 20:36:29.098593       1 main.go:227] handling current node
	I0612 20:36:29.098626       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0612 20:36:29.098645       1 main.go:250] Node ha-844626-m02 has CIDR [10.244.1.0/24] 
	I0612 20:36:29.098803       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0612 20:36:29.098839       1 main.go:250] Node ha-844626-m03 has CIDR [10.244.2.0/24] 
	I0612 20:36:29.098900       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0612 20:36:29.098917       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [41bc9389144d30c98a68d86d2f724492e05278d6c650700937bb9e9dca93881a] <==
	I0612 20:28:18.935723       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 20:28:19.678635       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 20:28:19.692646       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0612 20:28:19.822166       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 20:28:32.793517       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0612 20:28:33.193380       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0612 20:30:26.015026       1 wrap.go:54] timeout or abort while handling: method=POST URI="/api/v1/namespaces/kube-system/events" audit-ID="bd780f3c-7a4e-4ef7-b113-51a12949e669"
	E0612 20:30:26.015079       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 161.105µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0612 20:30:26.015100       1 timeout.go:142] post-timeout activity - time-elapsed: 2.376µs, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0612 20:32:02.452698       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47728: use of closed network connection
	E0612 20:32:02.659766       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47748: use of closed network connection
	E0612 20:32:02.846816       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47764: use of closed network connection
	E0612 20:32:03.062574       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47782: use of closed network connection
	E0612 20:32:03.248844       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47808: use of closed network connection
	E0612 20:32:03.438792       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47836: use of closed network connection
	E0612 20:32:03.612771       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47848: use of closed network connection
	E0612 20:32:03.798577       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47862: use of closed network connection
	E0612 20:32:03.969493       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47866: use of closed network connection
	E0612 20:32:04.284697       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47888: use of closed network connection
	E0612 20:32:04.469044       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47914: use of closed network connection
	E0612 20:32:04.667569       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47928: use of closed network connection
	E0612 20:32:04.868188       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47948: use of closed network connection
	E0612 20:32:05.057257       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47962: use of closed network connection
	E0612 20:32:05.237804       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47968: use of closed network connection
	W0612 20:33:18.823492       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.196 192.168.39.76]
	
	
	==> kube-controller-manager [1ac304305cc393d3678df3414155a5e9ca1fb5abecbd1ecb70c20c1c4f562bbf] <==
	I0612 20:31:58.294558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="188.015879ms"
	I0612 20:31:58.341531       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.911887ms"
	I0612 20:31:58.341859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.962µs"
	I0612 20:31:58.480463       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.565299ms"
	I0612 20:31:58.480603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.865µs"
	I0612 20:31:59.760811       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.899µs"
	I0612 20:31:59.772055       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="115.52µs"
	I0612 20:31:59.777140       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.055µs"
	I0612 20:31:59.796890       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.149µs"
	I0612 20:31:59.807477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.919µs"
	I0612 20:31:59.816716       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="300.243µs"
	I0612 20:32:01.517901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.269598ms"
	I0612 20:32:01.517997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.857µs"
	I0612 20:32:01.771751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.45305ms"
	I0612 20:32:01.771971       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.672µs"
	I0612 20:32:02.014075       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.476415ms"
	I0612 20:32:02.014298       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="151.137µs"
	E0612 20:32:34.920751       1 certificate_controller.go:146] Sync csr-jhkfg failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-jhkfg": the object has been modified; please apply your changes to the latest version and try again
	I0612 20:32:35.208325       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-844626-m04\" does not exist"
	I0612 20:32:35.227686       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-844626-m04" podCIDRs=["10.244.3.0/24"]
	I0612 20:32:37.305745       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-844626-m04"
	I0612 20:32:45.641684       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-844626-m04"
	I0612 20:33:42.350025       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-844626-m04"
	I0612 20:33:42.508379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.703911ms"
	I0612 20:33:42.508535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.547µs"
	
	
	==> kube-proxy [b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c] <==
	I0612 20:28:34.147183       1 server_linux.go:69] "Using iptables proxy"
	I0612 20:28:34.165061       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.196"]
	I0612 20:28:34.245342       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 20:28:34.245407       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 20:28:34.245424       1 server_linux.go:165] "Using iptables Proxier"
	I0612 20:28:34.255837       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 20:28:34.256333       1 server.go:872] "Version info" version="v1.30.1"
	I0612 20:28:34.256391       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:28:34.257947       1 config.go:192] "Starting service config controller"
	I0612 20:28:34.258011       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 20:28:34.258065       1 config.go:101] "Starting endpoint slice config controller"
	I0612 20:28:34.258085       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 20:28:34.260520       1 config.go:319] "Starting node config controller"
	I0612 20:28:34.261519       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 20:28:34.358924       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 20:28:34.359015       1 shared_informer.go:320] Caches are synced for service config
	I0612 20:28:34.361763       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92] <==
	W0612 20:28:18.299945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0612 20:28:18.299998       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0612 20:28:18.312918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0612 20:28:18.312948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0612 20:28:18.314410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0612 20:28:18.314482       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0612 20:28:18.342701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0612 20:28:18.342749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0612 20:28:18.433677       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 20:28:18.433733       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 20:28:21.051023       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0612 20:32:35.318772       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pwr4p\": pod kindnet-pwr4p is already assigned to node \"ha-844626-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pwr4p" node="ha-844626-m04"
	E0612 20:32:35.318997       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9757a4d6-0eb4-4893-8673-17fbeb293219(kube-system/kindnet-pwr4p) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-pwr4p"
	E0612 20:32:35.319032       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pwr4p\": pod kindnet-pwr4p is already assigned to node \"ha-844626-m04\"" pod="kube-system/kindnet-pwr4p"
	I0612 20:32:35.319080       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pwr4p" node="ha-844626-m04"
	E0612 20:32:35.330850       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dbk2r\": pod kube-proxy-dbk2r is already assigned to node \"ha-844626-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dbk2r" node="ha-844626-m04"
	E0612 20:32:35.330959       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3de040c5-ed32-45b2-94d6-b89ca999a410(kube-system/kube-proxy-dbk2r) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-dbk2r"
	E0612 20:32:35.331033       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dbk2r\": pod kube-proxy-dbk2r is already assigned to node \"ha-844626-m04\"" pod="kube-system/kube-proxy-dbk2r"
	I0612 20:32:35.331056       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dbk2r" node="ha-844626-m04"
	E0612 20:32:35.356582       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hnnqg\": pod kube-proxy-hnnqg is already assigned to node \"ha-844626-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hnnqg" node="ha-844626-m04"
	E0612 20:32:35.356735       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hnnqg\": pod kube-proxy-hnnqg is already assigned to node \"ha-844626-m04\"" pod="kube-system/kube-proxy-hnnqg"
	E0612 20:32:35.367332       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-45rls\": pod kindnet-45rls is already assigned to node \"ha-844626-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-45rls" node="ha-844626-m04"
	E0612 20:32:35.367412       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b1aac0cb-9a25-43e6-88e9-99b045417097(kube-system/kindnet-45rls) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-45rls"
	E0612 20:32:35.367432       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-45rls\": pod kindnet-45rls is already assigned to node \"ha-844626-m04\"" pod="kube-system/kindnet-45rls"
	I0612 20:32:35.367452       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-45rls" node="ha-844626-m04"
	
	
	==> kubelet <==
	Jun 12 20:32:19 ha-844626 kubelet[1371]: E0612 20:32:19.808314    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:32:19 ha-844626 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:32:19 ha-844626 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:32:19 ha-844626 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:32:19 ha-844626 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:33:19 ha-844626 kubelet[1371]: E0612 20:33:19.806297    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:33:19 ha-844626 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:33:19 ha-844626 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:33:19 ha-844626 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:33:19 ha-844626 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:34:19 ha-844626 kubelet[1371]: E0612 20:34:19.807083    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:34:19 ha-844626 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:34:19 ha-844626 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:34:19 ha-844626 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:34:19 ha-844626 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:35:19 ha-844626 kubelet[1371]: E0612 20:35:19.807301    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:35:19 ha-844626 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:35:19 ha-844626 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:35:19 ha-844626 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:35:19 ha-844626 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:36:19 ha-844626 kubelet[1371]: E0612 20:36:19.806404    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:36:19 ha-844626 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:36:19 ha-844626 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:36:19 ha-844626 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:36:19 ha-844626 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-844626 -n ha-844626
helpers_test.go:261: (dbg) Run:  kubectl --context ha-844626 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (60.98s)
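Note on the kubelet log captured above: the same failure repeats once a minute because the KUBE-KUBELET-CANARY chain cannot be created in the ip6tables nat table, which is not available in the guest kernel. A minimal host-side check, reusing the report's own minikube ssh convention (a diagnostic sketch only, not part of the test suite; it assumes the ip6table_nat module is shipped in the guest image):

	out/minikube-linux-amd64 -p ha-844626 ssh "sudo ip6tables -t nat -L -n || sudo modprobe ip6table_nat"

If the module loads, the canary errors should stop on the next kubelet retry; if it is absent from the guest image, these errors are likely cosmetic noise rather than the cause of the RestartSecondaryNode failure.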

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (400.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-844626 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-844626 -v=7 --alsologtostderr
E0612 20:36:48.613877   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:37:16.295536   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-844626 -v=7 --alsologtostderr: exit status 82 (2m1.881051231s)

                                                
                                                
-- stdout --
	* Stopping node "ha-844626-m04"  ...
	* Stopping node "ha-844626-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 20:36:32.833944   38651 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:36:32.834788   38651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:36:32.834803   38651 out.go:304] Setting ErrFile to fd 2...
	I0612 20:36:32.834809   38651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:36:32.835435   38651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:36:32.836010   38651 out.go:298] Setting JSON to false
	I0612 20:36:32.836165   38651 mustload.go:65] Loading cluster: ha-844626
	I0612 20:36:32.836585   38651 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:36:32.836667   38651 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:36:32.836874   38651 mustload.go:65] Loading cluster: ha-844626
	I0612 20:36:32.837066   38651 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:36:32.837121   38651 stop.go:39] StopHost: ha-844626-m04
	I0612 20:36:32.837491   38651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:32.837548   38651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:32.853047   38651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45997
	I0612 20:36:32.853594   38651 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:32.854173   38651 main.go:141] libmachine: Using API Version  1
	I0612 20:36:32.854194   38651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:32.854604   38651 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:32.857129   38651 out.go:177] * Stopping node "ha-844626-m04"  ...
	I0612 20:36:32.858418   38651 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0612 20:36:32.858444   38651 main.go:141] libmachine: (ha-844626-m04) Calling .DriverName
	I0612 20:36:32.858772   38651 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0612 20:36:32.858799   38651 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHHostname
	I0612 20:36:32.861863   38651 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:32.862375   38651 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:32:20 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:36:32.862409   38651 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:36:32.862504   38651 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHPort
	I0612 20:36:32.862720   38651 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHKeyPath
	I0612 20:36:32.862873   38651 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHUsername
	I0612 20:36:32.863031   38651 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m04/id_rsa Username:docker}
	I0612 20:36:32.955304   38651 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0612 20:36:33.011105   38651 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0612 20:36:33.065252   38651 main.go:141] libmachine: Stopping "ha-844626-m04"...
	I0612 20:36:33.065287   38651 main.go:141] libmachine: (ha-844626-m04) Calling .GetState
	I0612 20:36:33.066746   38651 main.go:141] libmachine: (ha-844626-m04) Calling .Stop
	I0612 20:36:33.070422   38651 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 0/120
	I0612 20:36:34.239420   38651 main.go:141] libmachine: (ha-844626-m04) Calling .GetState
	I0612 20:36:34.240637   38651 main.go:141] libmachine: Machine "ha-844626-m04" was stopped.
	I0612 20:36:34.240655   38651 stop.go:75] duration metric: took 1.382239073s to stop
	I0612 20:36:34.240671   38651 stop.go:39] StopHost: ha-844626-m03
	I0612 20:36:34.240950   38651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:36:34.240986   38651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:36:34.256840   38651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35103
	I0612 20:36:34.257188   38651 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:36:34.257600   38651 main.go:141] libmachine: Using API Version  1
	I0612 20:36:34.257621   38651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:36:34.257877   38651 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:36:34.260053   38651 out.go:177] * Stopping node "ha-844626-m03"  ...
	I0612 20:36:34.261297   38651 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0612 20:36:34.261320   38651 main.go:141] libmachine: (ha-844626-m03) Calling .DriverName
	I0612 20:36:34.261550   38651 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0612 20:36:34.261579   38651 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHHostname
	I0612 20:36:34.264692   38651 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:34.265141   38651 main.go:141] libmachine: (ha-844626-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:de:69", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:31:02 +0000 UTC Type:0 Mac:52:54:00:81:de:69 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-844626-m03 Clientid:01:52:54:00:81:de:69}
	I0612 20:36:34.265170   38651 main.go:141] libmachine: (ha-844626-m03) DBG | domain ha-844626-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:81:de:69 in network mk-ha-844626
	I0612 20:36:34.265321   38651 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHPort
	I0612 20:36:34.265516   38651 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHKeyPath
	I0612 20:36:34.265671   38651 main.go:141] libmachine: (ha-844626-m03) Calling .GetSSHUsername
	I0612 20:36:34.265801   38651 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m03/id_rsa Username:docker}
	I0612 20:36:34.353846   38651 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0612 20:36:34.410042   38651 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0612 20:36:34.465320   38651 main.go:141] libmachine: Stopping "ha-844626-m03"...
	I0612 20:36:34.465354   38651 main.go:141] libmachine: (ha-844626-m03) Calling .GetState
	I0612 20:36:34.467062   38651 main.go:141] libmachine: (ha-844626-m03) Calling .Stop
	I0612 20:36:34.471349   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 0/120
	I0612 20:36:35.472978   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 1/120
	I0612 20:36:36.474545   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 2/120
	I0612 20:36:37.475964   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 3/120
	I0612 20:36:38.477556   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 4/120
	I0612 20:36:39.479379   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 5/120
	I0612 20:36:40.480852   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 6/120
	I0612 20:36:41.482787   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 7/120
	I0612 20:36:42.484369   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 8/120
	I0612 20:36:43.486046   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 9/120
	I0612 20:36:44.488270   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 10/120
	I0612 20:36:45.489857   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 11/120
	I0612 20:36:46.491306   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 12/120
	I0612 20:36:47.492769   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 13/120
	I0612 20:36:48.494372   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 14/120
	I0612 20:36:49.496106   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 15/120
	I0612 20:36:50.497750   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 16/120
	I0612 20:36:51.498901   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 17/120
	I0612 20:36:52.500547   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 18/120
	I0612 20:36:53.502121   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 19/120
	I0612 20:36:54.503417   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 20/120
	I0612 20:36:55.504853   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 21/120
	I0612 20:36:56.506652   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 22/120
	I0612 20:36:57.508286   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 23/120
	I0612 20:36:58.509701   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 24/120
	I0612 20:36:59.511702   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 25/120
	I0612 20:37:00.513114   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 26/120
	I0612 20:37:01.514502   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 27/120
	I0612 20:37:02.516777   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 28/120
	I0612 20:37:03.518363   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 29/120
	I0612 20:37:04.520138   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 30/120
	I0612 20:37:05.521826   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 31/120
	I0612 20:37:06.523184   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 32/120
	I0612 20:37:07.524561   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 33/120
	I0612 20:37:08.525767   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 34/120
	I0612 20:37:09.527673   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 35/120
	I0612 20:37:10.529040   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 36/120
	I0612 20:37:11.530380   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 37/120
	I0612 20:37:12.531814   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 38/120
	I0612 20:37:13.533070   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 39/120
	I0612 20:37:14.535024   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 40/120
	I0612 20:37:15.536606   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 41/120
	I0612 20:37:16.537961   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 42/120
	I0612 20:37:17.539232   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 43/120
	I0612 20:37:18.540740   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 44/120
	I0612 20:37:19.542422   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 45/120
	I0612 20:37:20.543823   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 46/120
	I0612 20:37:21.545617   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 47/120
	I0612 20:37:22.546941   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 48/120
	I0612 20:37:23.548568   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 49/120
	I0612 20:37:24.550315   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 50/120
	I0612 20:37:25.552250   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 51/120
	I0612 20:37:26.553731   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 52/120
	I0612 20:37:27.555372   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 53/120
	I0612 20:37:28.556682   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 54/120
	I0612 20:37:29.558601   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 55/120
	I0612 20:37:30.559962   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 56/120
	I0612 20:37:31.561091   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 57/120
	I0612 20:37:32.562717   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 58/120
	I0612 20:37:33.563789   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 59/120
	I0612 20:37:34.565432   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 60/120
	I0612 20:37:35.566744   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 61/120
	I0612 20:37:36.567997   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 62/120
	I0612 20:37:37.569345   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 63/120
	I0612 20:37:38.570589   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 64/120
	I0612 20:37:39.572483   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 65/120
	I0612 20:37:40.573842   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 66/120
	I0612 20:37:41.575397   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 67/120
	I0612 20:37:42.576892   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 68/120
	I0612 20:37:43.578191   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 69/120
	I0612 20:37:44.580020   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 70/120
	I0612 20:37:45.581450   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 71/120
	I0612 20:37:46.582906   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 72/120
	I0612 20:37:47.584760   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 73/120
	I0612 20:37:48.587211   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 74/120
	I0612 20:37:49.589064   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 75/120
	I0612 20:37:50.590191   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 76/120
	I0612 20:37:51.591821   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 77/120
	I0612 20:37:52.594118   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 78/120
	I0612 20:37:53.595612   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 79/120
	I0612 20:37:54.597863   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 80/120
	I0612 20:37:55.599093   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 81/120
	I0612 20:37:56.600807   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 82/120
	I0612 20:37:57.601963   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 83/120
	I0612 20:37:58.603445   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 84/120
	I0612 20:37:59.605180   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 85/120
	I0612 20:38:00.606742   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 86/120
	I0612 20:38:01.608653   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 87/120
	I0612 20:38:02.609919   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 88/120
	I0612 20:38:03.611367   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 89/120
	I0612 20:38:04.613154   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 90/120
	I0612 20:38:05.614462   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 91/120
	I0612 20:38:06.615916   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 92/120
	I0612 20:38:07.617698   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 93/120
	I0612 20:38:08.618943   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 94/120
	I0612 20:38:09.620627   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 95/120
	I0612 20:38:10.621906   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 96/120
	I0612 20:38:11.623364   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 97/120
	I0612 20:38:12.625994   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 98/120
	I0612 20:38:13.627339   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 99/120
	I0612 20:38:14.628950   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 100/120
	I0612 20:38:15.630332   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 101/120
	I0612 20:38:16.631732   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 102/120
	I0612 20:38:17.633146   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 103/120
	I0612 20:38:18.634408   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 104/120
	I0612 20:38:19.636054   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 105/120
	I0612 20:38:20.637403   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 106/120
	I0612 20:38:21.638820   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 107/120
	I0612 20:38:22.640899   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 108/120
	I0612 20:38:23.643324   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 109/120
	I0612 20:38:24.645050   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 110/120
	I0612 20:38:25.646455   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 111/120
	I0612 20:38:26.647752   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 112/120
	I0612 20:38:27.649852   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 113/120
	I0612 20:38:28.651302   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 114/120
	I0612 20:38:29.653341   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 115/120
	I0612 20:38:30.654825   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 116/120
	I0612 20:38:31.656338   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 117/120
	I0612 20:38:32.657830   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 118/120
	I0612 20:38:33.659188   38651 main.go:141] libmachine: (ha-844626-m03) Waiting for machine to stop 119/120
	I0612 20:38:34.660100   38651 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0612 20:38:34.660144   38651 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0612 20:38:34.662198   38651 out.go:177] 
	W0612 20:38:34.663767   38651 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0612 20:38:34.663786   38651 out.go:239] * 
	* 
	W0612 20:38:34.665998   38651 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0612 20:38:34.667800   38651 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 stop -p ha-844626 -v=7 --alsologtostderr" : exit status 82
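The stop exhausted all 120 wait iterations with ha-844626-m03 still reported as Running, which is what produces the GUEST_STOP_TIMEOUT above. A quick way to confirm the domain state from the host before retrying (a sketch; it assumes virsh is installed and uses the same qemu:///system URI the kvm2 driver is configured with):

	virsh -c qemu:///system domstate ha-844626-m03
	out/minikube-linux-amd64 status -p ha-844626

If virsh still reports the domain as running, forcing it off with virsh destroy ha-844626-m03 would unblock a retry, at the cost of skipping the guest's clean shutdown.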
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-844626 --wait=true -v=7 --alsologtostderr
E0612 20:39:56.707348   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:41:19.750832   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:41:48.613370   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-844626 --wait=true -v=7 --alsologtostderr: (4m36.140204262s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-844626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-844626 -n ha-844626
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-844626 logs -n 25: (1.970884234s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-844626 cp ha-844626-m03:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m02:/home/docker/cp-test_ha-844626-m03_ha-844626-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626-m02 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m03_ha-844626-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m03:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04:/home/docker/cp-test_ha-844626-m03_ha-844626-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626-m04 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m03_ha-844626-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-844626 cp testdata/cp-test.txt                                              | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile43944605/001/cp-test_ha-844626-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626:/home/docker/cp-test_ha-844626-m04_ha-844626.txt                     |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626 sudo cat                                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m04_ha-844626.txt                               |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m02:/home/docker/cp-test_ha-844626-m04_ha-844626-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626-m02 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m04_ha-844626-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m03:/home/docker/cp-test_ha-844626-m04_ha-844626-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626-m03 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m04_ha-844626-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-844626 node stop m02 -v=7                                                   | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-844626 node start m02 -v=7                                                  | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:35 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-844626 -v=7                                                         | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:36 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | -p ha-844626 -v=7                                                              | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:36 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-844626 --wait=true -v=7                                                  | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:38 UTC | 12 Jun 24 20:43 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-844626                                                              | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:43 UTC |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 20:38:34
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 20:38:34.712104   39149 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:38:34.712344   39149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:38:34.712353   39149 out.go:304] Setting ErrFile to fd 2...
	I0612 20:38:34.712357   39149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:38:34.712524   39149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:38:34.713045   39149 out.go:298] Setting JSON to false
	I0612 20:38:34.713924   39149 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4860,"bootTime":1718219855,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 20:38:34.713977   39149 start.go:139] virtualization: kvm guest
	I0612 20:38:34.716453   39149 out.go:177] * [ha-844626] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 20:38:34.717841   39149 notify.go:220] Checking for updates...
	I0612 20:38:34.717859   39149 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 20:38:34.719200   39149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 20:38:34.720778   39149 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:38:34.722230   39149 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:38:34.723834   39149 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 20:38:34.725279   39149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 20:38:34.727156   39149 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:38:34.727311   39149 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 20:38:34.727708   39149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:38:34.727780   39149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:38:34.743421   39149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
	I0612 20:38:34.743848   39149 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:38:34.744442   39149 main.go:141] libmachine: Using API Version  1
	I0612 20:38:34.744461   39149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:38:34.744891   39149 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:38:34.745069   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:38:34.779647   39149 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 20:38:34.781007   39149 start.go:297] selected driver: kvm2
	I0612 20:38:34.781022   39149 start.go:901] validating driver "kvm2" against &{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:38:34.781195   39149 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 20:38:34.781556   39149 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:38:34.781663   39149 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 20:38:34.797759   39149 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0612 20:38:34.798429   39149 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 20:38:34.798512   39149 cni.go:84] Creating CNI manager for ""
	I0612 20:38:34.798527   39149 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0612 20:38:34.798584   39149 start.go:340] cluster config:
	{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:38:34.798719   39149 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:38:34.801453   39149 out.go:177] * Starting "ha-844626" primary control-plane node in "ha-844626" cluster
	I0612 20:38:34.802928   39149 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:38:34.802969   39149 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0612 20:38:34.802982   39149 cache.go:56] Caching tarball of preloaded images
	I0612 20:38:34.803059   39149 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 20:38:34.803071   39149 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 20:38:34.803229   39149 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:38:34.803444   39149 start.go:360] acquireMachinesLock for ha-844626: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 20:38:34.803501   39149 start.go:364] duration metric: took 38.081µs to acquireMachinesLock for "ha-844626"
	I0612 20:38:34.803521   39149 start.go:96] Skipping create...Using existing machine configuration
	I0612 20:38:34.803529   39149 fix.go:54] fixHost starting: 
	I0612 20:38:34.803782   39149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:38:34.803823   39149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:38:34.818620   39149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0612 20:38:34.819029   39149 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:38:34.819536   39149 main.go:141] libmachine: Using API Version  1
	I0612 20:38:34.819563   39149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:38:34.819898   39149 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:38:34.820069   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:38:34.820366   39149 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:38:34.821942   39149 fix.go:112] recreateIfNeeded on ha-844626: state=Running err=<nil>
	W0612 20:38:34.821968   39149 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 20:38:34.823921   39149 out.go:177] * Updating the running kvm2 "ha-844626" VM ...
	I0612 20:38:34.825230   39149 machine.go:94] provisionDockerMachine start ...
	I0612 20:38:34.825260   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:38:34.825475   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:38:34.828139   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:34.828643   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:38:34.828672   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:34.828809   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:38:34.829000   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:34.829176   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:34.829330   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:38:34.829503   39149 main.go:141] libmachine: Using SSH client type: native
	I0612 20:38:34.829769   39149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:38:34.829793   39149 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 20:38:34.941467   39149 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844626
	
	I0612 20:38:34.941494   39149 main.go:141] libmachine: (ha-844626) Calling .GetMachineName
	I0612 20:38:34.941748   39149 buildroot.go:166] provisioning hostname "ha-844626"
	I0612 20:38:34.941769   39149 main.go:141] libmachine: (ha-844626) Calling .GetMachineName
	I0612 20:38:34.941970   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:38:34.944831   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:34.945339   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:38:34.945370   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:34.945481   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:38:34.945664   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:34.945894   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:34.946067   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:38:34.946292   39149 main.go:141] libmachine: Using SSH client type: native
	I0612 20:38:34.946466   39149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:38:34.946479   39149 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844626 && echo "ha-844626" | sudo tee /etc/hostname
	I0612 20:38:35.067005   39149 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844626
	
	I0612 20:38:35.067047   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:38:35.069794   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.070169   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:38:35.070198   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.070408   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:38:35.070592   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:35.070731   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:35.070866   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:38:35.070992   39149 main.go:141] libmachine: Using SSH client type: native
	I0612 20:38:35.071153   39149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:38:35.071181   39149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844626' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844626/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844626' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 20:38:35.176503   39149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 20:38:35.176534   39149 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 20:38:35.176568   39149 buildroot.go:174] setting up certificates
	I0612 20:38:35.176576   39149 provision.go:84] configureAuth start
	I0612 20:38:35.176589   39149 main.go:141] libmachine: (ha-844626) Calling .GetMachineName
	I0612 20:38:35.176858   39149 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:38:35.179417   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.179766   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:38:35.179812   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.179930   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:38:35.182214   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.182601   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:38:35.182629   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.182743   39149 provision.go:143] copyHostCerts
	I0612 20:38:35.182781   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:38:35.182831   39149 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 20:38:35.182842   39149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:38:35.182918   39149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 20:38:35.183014   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:38:35.183034   39149 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 20:38:35.183040   39149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:38:35.183083   39149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 20:38:35.183141   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:38:35.183165   39149 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 20:38:35.183184   39149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:38:35.183217   39149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 20:38:35.183285   39149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.ha-844626 san=[127.0.0.1 192.168.39.196 ha-844626 localhost minikube]
	I0612 20:38:35.387144   39149 provision.go:177] copyRemoteCerts
	I0612 20:38:35.387229   39149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 20:38:35.387259   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:38:35.390019   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.390350   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:38:35.390379   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.390540   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:38:35.390754   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:35.390917   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:38:35.391065   39149 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:38:35.474126   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0612 20:38:35.474210   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 20:38:35.500491   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0612 20:38:35.500556   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0612 20:38:35.526624   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0612 20:38:35.526685   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 20:38:35.551890   39149 provision.go:87] duration metric: took 375.30296ms to configureAuth
	I0612 20:38:35.551915   39149 buildroot.go:189] setting minikube options for container-runtime
	I0612 20:38:35.552138   39149 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:38:35.552218   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:38:35.555096   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.555541   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:38:35.555567   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.555820   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:38:35.556035   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:35.556273   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:35.556468   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:38:35.556672   39149 main.go:141] libmachine: Using SSH client type: native
	I0612 20:38:35.556878   39149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:38:35.556913   39149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 20:40:06.412994   39149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 20:40:06.413021   39149 machine.go:97] duration metric: took 1m31.587775076s to provisionDockerMachine
	I0612 20:40:06.413037   39149 start.go:293] postStartSetup for "ha-844626" (driver="kvm2")
	I0612 20:40:06.413051   39149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 20:40:06.413070   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:40:06.413389   39149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 20:40:06.413419   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:40:06.416258   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.416626   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:40:06.416651   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.416811   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:40:06.417002   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:40:06.417177   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:40:06.417315   39149 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:40:06.500013   39149 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 20:40:06.504268   39149 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 20:40:06.504291   39149 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 20:40:06.504368   39149 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 20:40:06.504456   39149 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 20:40:06.504467   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /etc/ssl/certs/214442.pem
	I0612 20:40:06.504563   39149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 20:40:06.515196   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:40:06.538492   39149 start.go:296] duration metric: took 125.440977ms for postStartSetup
	I0612 20:40:06.538536   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:40:06.538824   39149 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0612 20:40:06.538847   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:40:06.541351   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.541699   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:40:06.541719   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.541885   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:40:06.542075   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:40:06.542232   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:40:06.542347   39149 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	W0612 20:40:06.622031   39149 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0612 20:40:06.622052   39149 fix.go:56] duration metric: took 1m31.818525074s for fixHost
	I0612 20:40:06.622073   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:40:06.624588   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.625027   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:40:06.625085   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.625204   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:40:06.625396   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:40:06.625593   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:40:06.625740   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:40:06.625902   39149 main.go:141] libmachine: Using SSH client type: native
	I0612 20:40:06.626052   39149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:40:06.626061   39149 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 20:40:06.728179   39149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718224806.687531994
	
	I0612 20:40:06.728236   39149 fix.go:216] guest clock: 1718224806.687531994
	I0612 20:40:06.728263   39149 fix.go:229] Guest: 2024-06-12 20:40:06.687531994 +0000 UTC Remote: 2024-06-12 20:40:06.622059013 +0000 UTC m=+91.943977263 (delta=65.472981ms)
	I0612 20:40:06.728301   39149 fix.go:200] guest clock delta is within tolerance: 65.472981ms
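The two timestamps above differ by plain subtraction: 1718224806.687531994 - 1718224806.622059013 = 0.065472981 s, i.e. the 65.472981ms delta reported by fix.go. A minimal Go sketch of this kind of clock-skew check follows; the tolerance constant is an assumed illustrative value, not one taken from this log.

    package main

    import (
    	"fmt"
    	"time"
    )

    // maxClockDelta is an assumed illustrative tolerance; the threshold minikube
    // actually enforces is not shown in this log.
    const maxClockDelta = 2 * time.Second

    func main() {
    	// Timestamps copied from the "guest clock" and "Remote" entries above.
    	guest := time.Unix(1718224806, 687531994).UTC()                  // 2024-06-12 20:40:06.687531994 UTC
    	remote := time.Date(2024, 6, 12, 20, 40, 6, 622059013, time.UTC) // 2024-06-12 20:40:06.622059013 UTC

    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, maxClockDelta, delta <= maxClockDelta)
    	// Prints: delta=65.472981ms within tolerance=2s: true
    }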
	I0612 20:40:06.728309   39149 start.go:83] releasing machines lock for "ha-844626", held for 1m31.924796123s
	I0612 20:40:06.728340   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:40:06.728629   39149 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:40:06.731166   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.731572   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:40:06.731600   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.731723   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:40:06.732245   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:40:06.732425   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:40:06.732497   39149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 20:40:06.732537   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:40:06.732577   39149 ssh_runner.go:195] Run: cat /version.json
	I0612 20:40:06.732594   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:40:06.735020   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.735211   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.735463   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:40:06.735483   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.735622   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:40:06.735656   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:40:06.735678   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.735833   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:40:06.735842   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:40:06.736005   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:40:06.736006   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:40:06.736184   39149 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:40:06.736198   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:40:06.736321   39149 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:40:06.812762   39149 ssh_runner.go:195] Run: systemctl --version
	I0612 20:40:06.838816   39149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 20:40:07.005557   39149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 20:40:07.014768   39149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 20:40:07.014844   39149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 20:40:07.024257   39149 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0612 20:40:07.024274   39149 start.go:494] detecting cgroup driver to use...
	I0612 20:40:07.024341   39149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 20:40:07.040276   39149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 20:40:07.054168   39149 docker.go:217] disabling cri-docker service (if available) ...
	I0612 20:40:07.054217   39149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 20:40:07.068300   39149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 20:40:07.082569   39149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 20:40:07.234949   39149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 20:40:07.384601   39149 docker.go:233] disabling docker service ...
	I0612 20:40:07.384665   39149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 20:40:07.402285   39149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 20:40:07.415788   39149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 20:40:07.558521   39149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 20:40:07.707378   39149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 20:40:07.721917   39149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 20:40:07.741663   39149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 20:40:07.741728   39149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:40:07.752331   39149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 20:40:07.752400   39149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:40:07.762707   39149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:40:07.773279   39149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:40:07.784122   39149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 20:40:07.795203   39149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:40:07.806138   39149 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:40:07.817835   39149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:40:07.828272   39149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 20:40:07.838058   39149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 20:40:07.847424   39149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:40:07.996459   39149 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 20:40:10.990562   39149 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.994066026s)
	I0612 20:40:10.990593   39149 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 20:40:10.990635   39149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 20:40:10.995880   39149 start.go:562] Will wait 60s for crictl version
	I0612 20:40:10.995923   39149 ssh_runner.go:195] Run: which crictl
	I0612 20:40:10.999815   39149 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 20:40:11.041550   39149 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 20:40:11.041627   39149 ssh_runner.go:195] Run: crio --version
	I0612 20:40:11.070257   39149 ssh_runner.go:195] Run: crio --version
	I0612 20:40:11.101721   39149 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 20:40:11.103275   39149 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:40:11.105818   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:11.106174   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:40:11.106202   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:11.106439   39149 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 20:40:11.111490   39149 kubeadm.go:877] updating cluster {Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 20:40:11.111618   39149 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:40:11.111660   39149 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 20:40:11.157660   39149 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 20:40:11.157682   39149 crio.go:433] Images already preloaded, skipping extraction
	I0612 20:40:11.157732   39149 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 20:40:11.196348   39149 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 20:40:11.196375   39149 cache_images.go:84] Images are preloaded, skipping loading
	I0612 20:40:11.196387   39149 kubeadm.go:928] updating node { 192.168.39.196 8443 v1.30.1 crio true true} ...
	I0612 20:40:11.196490   39149 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844626 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 20:40:11.196551   39149 ssh_runner.go:195] Run: crio config
	I0612 20:40:11.245223   39149 cni.go:84] Creating CNI manager for ""
	I0612 20:40:11.245239   39149 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0612 20:40:11.245248   39149 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 20:40:11.245279   39149 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-844626 NodeName:ha-844626 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 20:40:11.245442   39149 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-844626"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 20:40:11.245467   39149 kube-vip.go:115] generating kube-vip config ...
	I0612 20:40:11.245514   39149 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0612 20:40:11.257847   39149 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0612 20:40:11.257946   39149 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0612 20:40:11.258008   39149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 20:40:11.268067   39149 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 20:40:11.268138   39149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0612 20:40:11.277887   39149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0612 20:40:11.295688   39149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 20:40:11.312882   39149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0612 20:40:11.329895   39149 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0612 20:40:11.348600   39149 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0612 20:40:11.359564   39149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:40:11.508615   39149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 20:40:11.524444   39149 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626 for IP: 192.168.39.196
	I0612 20:40:11.524466   39149 certs.go:194] generating shared ca certs ...
	I0612 20:40:11.524482   39149 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:40:11.524636   39149 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 20:40:11.524686   39149 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 20:40:11.524700   39149 certs.go:256] generating profile certs ...
	I0612 20:40:11.524803   39149 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key
	I0612 20:40:11.524837   39149 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.dc56d1b6
	I0612 20:40:11.524857   39149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.dc56d1b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.108 192.168.39.76 192.168.39.254]
	I0612 20:40:12.014863   39149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.dc56d1b6 ...
	I0612 20:40:12.014898   39149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.dc56d1b6: {Name:mkea74692ba818d459bfe24cc809837ba8cc37aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:40:12.015115   39149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.dc56d1b6 ...
	I0612 20:40:12.015133   39149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.dc56d1b6: {Name:mkfa2aef60fd21dd1b6b30767207e755ac62c104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:40:12.015254   39149 certs.go:381] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.dc56d1b6 -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt
	I0612 20:40:12.015464   39149 certs.go:385] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.dc56d1b6 -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key
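The apiserver profile certificate generated above carries the IPs listed by crypto.go:68 as subject alternative names. A minimal, self-contained Go sketch of issuing a certificate with those IP SANs follows; it is self-signed for brevity, whereas minikube signs against its cluster CA key, and the subject field here is purely illustrative since the log does not show it.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Self-signed here for brevity; minikube signs the profile cert with its cluster CA key.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}

    	// IP SANs mirror the list printed by crypto.go:68 above.
    	ipStrings := []string{
    		"10.96.0.1", "127.0.0.1", "10.0.0.1",
    		"192.168.39.196", "192.168.39.108", "192.168.39.76", "192.168.39.254",
    	}
    	var sans []net.IP
    	for _, ip := range ipStrings {
    		sans = append(sans, net.ParseIP(ip))
    	}

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "illustrative-apiserver"}, // subject fields are not shown in the log
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration value in the cluster config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  sans,
    	}

    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }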
	I0612 20:40:12.015658   39149 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key
	I0612 20:40:12.015678   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 20:40:12.015709   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0612 20:40:12.015732   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 20:40:12.015756   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 20:40:12.015775   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 20:40:12.015796   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 20:40:12.015818   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 20:40:12.015840   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 20:40:12.015907   39149 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 20:40:12.015951   39149 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 20:40:12.015966   39149 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 20:40:12.016014   39149 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 20:40:12.016053   39149 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 20:40:12.016088   39149 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 20:40:12.016150   39149 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:40:12.016194   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /usr/share/ca-certificates/214442.pem
	I0612 20:40:12.016216   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:40:12.016236   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem -> /usr/share/ca-certificates/21444.pem
	I0612 20:40:12.016824   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 20:40:12.042998   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 20:40:12.066657   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 20:40:12.090297   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 20:40:12.114771   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0612 20:40:12.139044   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 20:40:12.163809   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 20:40:12.188358   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 20:40:12.213228   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 20:40:12.238051   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 20:40:12.262216   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 20:40:12.285543   39149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 20:40:12.302790   39149 ssh_runner.go:195] Run: openssl version
	I0612 20:40:12.309200   39149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 20:40:12.319681   39149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 20:40:12.324276   39149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 20:40:12.324333   39149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 20:40:12.330069   39149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 20:40:12.340476   39149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 20:40:12.351384   39149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:40:12.355908   39149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:40:12.355969   39149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:40:12.361544   39149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 20:40:12.370454   39149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 20:40:12.380966   39149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 20:40:12.385518   39149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 20:40:12.385579   39149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 20:40:12.391440   39149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 20:40:12.400649   39149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 20:40:12.405772   39149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 20:40:12.411608   39149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 20:40:12.417495   39149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 20:40:12.423165   39149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 20:40:12.428860   39149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 20:40:12.435374   39149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 20:40:12.441545   39149 kubeadm.go:391] StartCluster: {Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:40:12.441681   39149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 20:40:12.441727   39149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 20:40:12.515928   39149 cri.go:89] found id: "3c021ec12933d9321a7393dfad4f45b7d05ffc04c4c8954c28e02082e86c1306"
	I0612 20:40:12.515947   39149 cri.go:89] found id: "944c3d1c25165f196a8d630dc945dc1a4162fb8a11f750259dd23974392b5a8c"
	I0612 20:40:12.515951   39149 cri.go:89] found id: "09c8070fe3b658046a9a19733813849b24fa6b99ac5080e9c92e4865b4b3cdc3"
	I0612 20:40:12.515954   39149 cri.go:89] found id: "ed87fc57398ca349ce32bc4fcea61bb7ede6451b9fe8db63349ef7ee6151bd50"
	I0612 20:40:12.515957   39149 cri.go:89] found id: "5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0"
	I0612 20:40:12.515962   39149 cri.go:89] found id: "6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff"
	I0612 20:40:12.515965   39149 cri.go:89] found id: "63a8f38c6abf70e91806516f6efb3aec847188dad6c91439ca9660d95029a3e6"
	I0612 20:40:12.515967   39149 cri.go:89] found id: "b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c"
	I0612 20:40:12.515970   39149 cri.go:89] found id: "cd52024c12a2b486d52b8f6803360b3172fb54227b17758bbd09a2e22dc32163"
	I0612 20:40:12.515974   39149 cri.go:89] found id: "6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d"
	I0612 20:40:12.515977   39149 cri.go:89] found id: "223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92"
	I0612 20:40:12.515979   39149 cri.go:89] found id: "41bc9389144d30c98a68d86d2f724492e05278d6c650700937bb9e9dca93881a"
	I0612 20:40:12.515981   39149 cri.go:89] found id: "1ac304305cc393d3678df3414155a5e9ca1fb5abecbd1ecb70c20c1c4f562bbf"
	I0612 20:40:12.515984   39149 cri.go:89] found id: ""
	I0612 20:40:12.516032   39149 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.640137305Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c939bfd-664e-49ee-a780-42b12942a1b7 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.641772945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94c046c1-afdf-449d-a944-038ef2a5076e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.642962987Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718224991642883291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94c046c1-afdf-449d-a944-038ef2a5076e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.643907404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=791c209f-886a-4573-abdf-912db72b9b0d name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.644003100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=791c209f-886a-4573-abdf-912db72b9b0d name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.644686772Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7032c9d518b83b22af1468d51f671cd78fe893958d313f9a62c6310e07e5eb6c,PodSandboxId:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718224899813940041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f432160e35b26c7b012ec4edfd7d00508fb15c4cc8f9547df1507fa19a6dabee,PodSandboxId:2791c645324815b106b820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718224879817848716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fe7677b75490eb9a887b3192e914a38bbc5fd772111c9a731fd0c67b961eea,PodSandboxId:7ad55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224860802741608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64c4a2d567239f2cf47396cba150c895012356b8ff9c055eafd3490a6316c791,PodSandboxId:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224856808159694,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705c2561cc55952a4ac898f54cc34444e53d2f4bdfa63cf7bd8c2ebb56472f73,PodSandboxId:641e7ec9022152f82e52e566a21ce495ad6fccbd26b6cd0a919ea39bd3bc1dea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224851037770649,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd3375cfa65cdf6427956610a22c5ad458ab15dcb4c60281d661e3b46f921ce,PodSandboxId:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718224849810548318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8445efcf36094d2712e6d7eeebc0e6b73520b6f1f530e37bbf40c8108e6e326e,PodSandboxId:b9d9b289b932c027eadfd224d1f9763c600e3cd5b391176fe10b1d15c75c0302,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718224832658819585,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578a6adb37c07fb3ddb14c1b9f4fcd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:847d6ff92e8e601118971db1953ddd8cd8fd05b8a16cb89aef9e6bf5c67a8426,PodSandboxId:125f3e7aa763c8c93918780c5657199e412c7d2ff7c89b4c9599b1b8c13ab2fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718224817714758801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99ff254
1bb480d1e29fe0cdcb21ac962bbb63edc50c303d905d5df9c801bb3f,PodSandboxId:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718224817578288212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4350e829162646be417b019b2cb971
ff3a4548b2e65be4e5d7cc77a69a513de1,PodSandboxId:7156712f8ff2d4b1d06493d07a671bf6c4cf93c4fa5f096208275e7832fc39de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224817463520758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d98b9e0f5051ab363ca02821c8f8d231f5298a04d44f3f40a1ac8a145a70e570,PodSandboxId:7ad
55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718224817496833882,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82299b1981f41471feb0a36cd022834e98c7a620a668655d739be255454304da,PodSandboxI
d:a2cb079d37a3df3a47fa418b51318b536fcacbe99a2d5d5e64178be7ae8c9e95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224817415619422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf561296e89021cfeb3942a411fdb1a39d363d089d6c0e3abc9f21a0ed0a02b,PodSandboxId:2791c645324815b106b
820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718224812970582901,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209b6c2b28de4f9be36a8b96a42fd0658f8741138b54758c0a4036332c38a03b,PodSandboxId:40c46a3d0827b647af9e44003959e84272fa458e2637139dc12e33
0df8ecc125,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812829188315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:177da54ccde0b134f353821e30d94d485a45f9d5c67619d03d4ff3935aed495d,PodSandboxId:1c7b0383df5e6c2039396c35f89b50155ef1ff7d02214ba0dd246af1bfc68f23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812774314531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf4b3ead47f7dfc1b7faf2419e80a004cb2158ced9fe68be13277115f3c6569,PodSandboxId:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718224321149910871,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kuberne
tes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0,PodSandboxId:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718224119278718424,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff,PodSandboxId:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718224119207439239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c,PodSandboxId:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718224113786859746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d,PodSandboxId:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718224093469660512,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92,PodSandboxId:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718224093415553992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=791c209f-886a-4573-abdf-912db72b9b0d name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.703340699Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e53f971a-ff2d-4ef7-894e-4ee77fdb0dc7 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.703483704Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e53f971a-ff2d-4ef7-894e-4ee77fdb0dc7 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.705816876Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd9bcafd-7b13-4b0b-b602-97c198ec65dd name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.706518536Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718224991706482397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd9bcafd-7b13-4b0b-b602-97c198ec65dd name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.707801795Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9e03a16-390f-4d1c-8aef-18fbc06981d1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.707882828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9e03a16-390f-4d1c-8aef-18fbc06981d1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.709361725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7032c9d518b83b22af1468d51f671cd78fe893958d313f9a62c6310e07e5eb6c,PodSandboxId:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718224899813940041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f432160e35b26c7b012ec4edfd7d00508fb15c4cc8f9547df1507fa19a6dabee,PodSandboxId:2791c645324815b106b820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718224879817848716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fe7677b75490eb9a887b3192e914a38bbc5fd772111c9a731fd0c67b961eea,PodSandboxId:7ad55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224860802741608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64c4a2d567239f2cf47396cba150c895012356b8ff9c055eafd3490a6316c791,PodSandboxId:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224856808159694,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705c2561cc55952a4ac898f54cc34444e53d2f4bdfa63cf7bd8c2ebb56472f73,PodSandboxId:641e7ec9022152f82e52e566a21ce495ad6fccbd26b6cd0a919ea39bd3bc1dea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224851037770649,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd3375cfa65cdf6427956610a22c5ad458ab15dcb4c60281d661e3b46f921ce,PodSandboxId:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718224849810548318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8445efcf36094d2712e6d7eeebc0e6b73520b6f1f530e37bbf40c8108e6e326e,PodSandboxId:b9d9b289b932c027eadfd224d1f9763c600e3cd5b391176fe10b1d15c75c0302,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718224832658819585,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578a6adb37c07fb3ddb14c1b9f4fcd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:847d6ff92e8e601118971db1953ddd8cd8fd05b8a16cb89aef9e6bf5c67a8426,PodSandboxId:125f3e7aa763c8c93918780c5657199e412c7d2ff7c89b4c9599b1b8c13ab2fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718224817714758801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99ff254
1bb480d1e29fe0cdcb21ac962bbb63edc50c303d905d5df9c801bb3f,PodSandboxId:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718224817578288212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4350e829162646be417b019b2cb971
ff3a4548b2e65be4e5d7cc77a69a513de1,PodSandboxId:7156712f8ff2d4b1d06493d07a671bf6c4cf93c4fa5f096208275e7832fc39de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224817463520758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d98b9e0f5051ab363ca02821c8f8d231f5298a04d44f3f40a1ac8a145a70e570,PodSandboxId:7ad
55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718224817496833882,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82299b1981f41471feb0a36cd022834e98c7a620a668655d739be255454304da,PodSandboxI
d:a2cb079d37a3df3a47fa418b51318b536fcacbe99a2d5d5e64178be7ae8c9e95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224817415619422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf561296e89021cfeb3942a411fdb1a39d363d089d6c0e3abc9f21a0ed0a02b,PodSandboxId:2791c645324815b106b
820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718224812970582901,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209b6c2b28de4f9be36a8b96a42fd0658f8741138b54758c0a4036332c38a03b,PodSandboxId:40c46a3d0827b647af9e44003959e84272fa458e2637139dc12e33
0df8ecc125,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812829188315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:177da54ccde0b134f353821e30d94d485a45f9d5c67619d03d4ff3935aed495d,PodSandboxId:1c7b0383df5e6c2039396c35f89b50155ef1ff7d02214ba0dd246af1bfc68f23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812774314531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf4b3ead47f7dfc1b7faf2419e80a004cb2158ced9fe68be13277115f3c6569,PodSandboxId:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718224321149910871,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kuberne
tes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0,PodSandboxId:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718224119278718424,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff,PodSandboxId:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718224119207439239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c,PodSandboxId:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718224113786859746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d,PodSandboxId:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718224093469660512,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92,PodSandboxId:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718224093415553992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9e03a16-390f-4d1c-8aef-18fbc06981d1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.767991373Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e864226-1566-4645-997a-2205e46bdb7f name=/runtime.v1.RuntimeService/Version
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.768087202Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e864226-1566-4645-997a-2205e46bdb7f name=/runtime.v1.RuntimeService/Version
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.769525973Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=caff6967-2d08-41e7-b167-195c4c6b448d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.769962138Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718224991769939598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=caff6967-2d08-41e7-b167-195c4c6b448d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.770760980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28eb8029-66ef-4c8e-a0e5-c96e5a11f15d name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.770812731Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28eb8029-66ef-4c8e-a0e5-c96e5a11f15d name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.771299249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7032c9d518b83b22af1468d51f671cd78fe893958d313f9a62c6310e07e5eb6c,PodSandboxId:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718224899813940041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f432160e35b26c7b012ec4edfd7d00508fb15c4cc8f9547df1507fa19a6dabee,PodSandboxId:2791c645324815b106b820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718224879817848716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fe7677b75490eb9a887b3192e914a38bbc5fd772111c9a731fd0c67b961eea,PodSandboxId:7ad55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224860802741608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64c4a2d567239f2cf47396cba150c895012356b8ff9c055eafd3490a6316c791,PodSandboxId:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224856808159694,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705c2561cc55952a4ac898f54cc34444e53d2f4bdfa63cf7bd8c2ebb56472f73,PodSandboxId:641e7ec9022152f82e52e566a21ce495ad6fccbd26b6cd0a919ea39bd3bc1dea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224851037770649,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd3375cfa65cdf6427956610a22c5ad458ab15dcb4c60281d661e3b46f921ce,PodSandboxId:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718224849810548318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8445efcf36094d2712e6d7eeebc0e6b73520b6f1f530e37bbf40c8108e6e326e,PodSandboxId:b9d9b289b932c027eadfd224d1f9763c600e3cd5b391176fe10b1d15c75c0302,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718224832658819585,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578a6adb37c07fb3ddb14c1b9f4fcd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:847d6ff92e8e601118971db1953ddd8cd8fd05b8a16cb89aef9e6bf5c67a8426,PodSandboxId:125f3e7aa763c8c93918780c5657199e412c7d2ff7c89b4c9599b1b8c13ab2fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718224817714758801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99ff254
1bb480d1e29fe0cdcb21ac962bbb63edc50c303d905d5df9c801bb3f,PodSandboxId:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718224817578288212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4350e829162646be417b019b2cb971
ff3a4548b2e65be4e5d7cc77a69a513de1,PodSandboxId:7156712f8ff2d4b1d06493d07a671bf6c4cf93c4fa5f096208275e7832fc39de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224817463520758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d98b9e0f5051ab363ca02821c8f8d231f5298a04d44f3f40a1ac8a145a70e570,PodSandboxId:7ad
55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718224817496833882,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82299b1981f41471feb0a36cd022834e98c7a620a668655d739be255454304da,PodSandboxI
d:a2cb079d37a3df3a47fa418b51318b536fcacbe99a2d5d5e64178be7ae8c9e95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224817415619422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf561296e89021cfeb3942a411fdb1a39d363d089d6c0e3abc9f21a0ed0a02b,PodSandboxId:2791c645324815b106b
820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718224812970582901,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209b6c2b28de4f9be36a8b96a42fd0658f8741138b54758c0a4036332c38a03b,PodSandboxId:40c46a3d0827b647af9e44003959e84272fa458e2637139dc12e33
0df8ecc125,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812829188315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:177da54ccde0b134f353821e30d94d485a45f9d5c67619d03d4ff3935aed495d,PodSandboxId:1c7b0383df5e6c2039396c35f89b50155ef1ff7d02214ba0dd246af1bfc68f23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812774314531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf4b3ead47f7dfc1b7faf2419e80a004cb2158ced9fe68be13277115f3c6569,PodSandboxId:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718224321149910871,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kuberne
tes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0,PodSandboxId:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718224119278718424,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff,PodSandboxId:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718224119207439239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c,PodSandboxId:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718224113786859746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d,PodSandboxId:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718224093469660512,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92,PodSandboxId:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718224093415553992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28eb8029-66ef-4c8e-a0e5-c96e5a11f15d name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.791136798Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54773ad7-746d-4a01-8461-de9b73f0b559 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.791651385Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:641e7ec9022152f82e52e566a21ce495ad6fccbd26b6cd0a919ea39bd3bc1dea,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-bdzsx,Uid:74f96190-8d97-478c-b01d-de61520289be,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718224850913681132,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T20:31:58.083744941Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b9d9b289b932c027eadfd224d1f9763c600e3cd5b391176fe10b1d15c75c0302,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-844626,Uid:0578a6adb37c07fb3ddb14c1b9f4fcd3,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1718224832548851147,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578a6adb37c07fb3ddb14c1b9f4fcd3,},Annotations:map[string]string{kubernetes.io/config.hash: 0578a6adb37c07fb3ddb14c1b9f4fcd3,kubernetes.io/config.seen: 2024-06-12T20:40:11.307653344Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7ad55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-844626,Uid:48a4dcb0404b2818e4d9a3c344a7e5d6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718224817205599944,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/con
fig.hash: 48a4dcb0404b2818e4d9a3c344a7e5d6,kubernetes.io/config.seen: 2024-06-12T20:28:19.783175334Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:125f3e7aa763c8c93918780c5657199e412c7d2ff7c89b4c9599b1b8c13ab2fb,Metadata:&PodSandboxMetadata{Name:kube-proxy-69ctp,Uid:c66149e8-2a69-4f1f-9ddc-5e272204e6f5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718224817201278053,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T20:28:33.229353463Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-844626,Uid:5d96acdf137cf3b5a36cb1641ff47f87,Namespace
:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718224817200119141,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.196:8443,kubernetes.io/config.hash: 5d96acdf137cf3b5a36cb1641ff47f87,kubernetes.io/config.seen: 2024-06-12T20:28:19.783174386Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d94c16d7-da82-41e3-82fe-83ed6e581f69,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718224817184785186,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kuberne
tes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-12T20:28:38.623962154Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2cb079d37a3df3a47f
a418b51318b536fcacbe99a2d5d5e64178be7ae8c9e95,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-844626,Uid:f6a445b2a0c4cdfeb60569362c5f7933,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718224817158370847,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f6a445b2a0c4cdfeb60569362c5f7933,kubernetes.io/config.seen: 2024-06-12T20:28:19.783168073Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7156712f8ff2d4b1d06493d07a671bf6c4cf93c4fa5f096208275e7832fc39de,Metadata:&PodSandboxMetadata{Name:etcd-ha-844626,Uid:5eeb7c1880efee41beff2f38986d6a2f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718224817156950316,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.
name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.196:2379,kubernetes.io/config.hash: 5eeb7c1880efee41beff2f38986d6a2f,kubernetes.io/config.seen: 2024-06-12T20:28:19.783173253Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:40c46a3d0827b647af9e44003959e84272fa458e2637139dc12e330df8ecc125,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqzvn,Uid:b22b3ba0-1a59-4066-9db5-380986d73dca,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718224812585798170,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T20:28:38.6328
55885Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c7b0383df5e6c2039396c35f89b50155ef1ff7d02214ba0dd246af1bfc68f23,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-lxd6n,Uid:65d25d78-6fa7-4dc7-9cf2-e2fac796f194,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718224812543502695,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T20:28:38.737567040Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2791c645324815b106b820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&PodSandboxMetadata{Name:kindnet-mthnq,Uid:49950bb0-368d-4239-ae93-04c980a8b531,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718224812475723501,Labels:map[string]string{app: kindnet,controlle
r-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T20:28:33.221392803Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=54773ad7-746d-4a01-8461-de9b73f0b559 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.792366661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b782dd10-3a5f-4dfc-a50e-924afceb88b2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.792438538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b782dd10-3a5f-4dfc-a50e-924afceb88b2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:43:11 ha-844626 crio[3826]: time="2024-06-12 20:43:11.792651751Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7032c9d518b83b22af1468d51f671cd78fe893958d313f9a62c6310e07e5eb6c,PodSandboxId:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718224899813940041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f432160e35b26c7b012ec4edfd7d00508fb15c4cc8f9547df1507fa19a6dabee,PodSandboxId:2791c645324815b106b820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718224879817848716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fe7677b75490eb9a887b3192e914a38bbc5fd772111c9a731fd0c67b961eea,PodSandboxId:7ad55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224860802741608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64c4a2d567239f2cf47396cba150c895012356b8ff9c055eafd3490a6316c791,PodSandboxId:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224856808159694,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705c2561cc55952a4ac898f54cc34444e53d2f4bdfa63cf7bd8c2ebb56472f73,PodSandboxId:641e7ec9022152f82e52e566a21ce495ad6fccbd26b6cd0a919ea39bd3bc1dea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224851037770649,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8445efcf36094d2712e6d7eeebc0e6b73520b6f1f530e37bbf40c8108e6e326e,PodSandboxId:b9d9b289b932c027eadfd224d1f9763c600e3cd5b391176fe10b1d15c75c0302,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718224832658819585,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578a6adb37c07fb3ddb14c1b9f4fcd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:847d6ff92e8e601118971db1953ddd8cd8fd05b8a16cb89aef9e6bf5c67a8426,PodSandboxId:125f3e7aa763c8c93918780c5657199e412c7d2ff7c89b4c9599b1b8c13ab2fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718224817714758801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:4350e829162646be417b019b2cb971ff3a4548b2e65be4e5d7cc77a69a513de1,PodSandboxId:7156712f8ff2d4b1d06493d07a671bf6c4cf93c4fa5f096208275e7832fc39de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224817463520758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82299b1981f41471feb0a36cd022834e98c7a620a66865
5d739be255454304da,PodSandboxId:a2cb079d37a3df3a47fa418b51318b536fcacbe99a2d5d5e64178be7ae8c9e95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224817415619422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209b6c2b28de4f9be36a8b96a42fd0658f8741138b54758c0a4036332c38a03b,Po
dSandboxId:40c46a3d0827b647af9e44003959e84272fa458e2637139dc12e330df8ecc125,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812829188315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:177da54ccde0b134f353821e30d94d485a45f9d5c67619d03d4ff3935aed495d,PodSandboxId:1c7b0383df5e6c2039396c35f89b50155ef1ff7d02214ba0dd246af1bfc68f23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812774314531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort
\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b782dd10-3a5f-4dfc-a50e-924afceb88b2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7032c9d518b83       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   5c95de2f00554       storage-provisioner
	f432160e35b26       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               3                   2791c64532481       kindnet-mthnq
	27fe7677b7549       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago        Running             kube-controller-manager   2                   7ad55e7c88ed2       kube-controller-manager-ha-844626
	64c4a2d567239       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Running             kube-apiserver            3                   ff49270d85d97       kube-apiserver-ha-844626
	705c2561cc559       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   641e7ec902215       busybox-fc5497c4f-bdzsx
	6cd3375cfa65c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   5c95de2f00554       storage-provisioner
	8445efcf36094       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   b9d9b289b932c       kube-vip-ha-844626
	847d6ff92e8e6       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      2 minutes ago        Running             kube-proxy                1                   125f3e7aa763c       kube-proxy-69ctp
	c99ff2541bb48       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Exited              kube-apiserver            2                   ff49270d85d97       kube-apiserver-ha-844626
	d98b9e0f5051a       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago        Exited              kube-controller-manager   1                   7ad55e7c88ed2       kube-controller-manager-ha-844626
	4350e82916264       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   7156712f8ff2d       etcd-ha-844626
	82299b1981f41       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      2 minutes ago        Running             kube-scheduler            1                   a2cb079d37a3d       kube-scheduler-ha-844626
	ecf561296e890       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      2 minutes ago        Exited              kindnet-cni               2                   2791c64532481       kindnet-mthnq
	209b6c2b28de4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   40c46a3d0827b       coredns-7db6d8ff4d-bqzvn
	177da54ccde0b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   1c7b0383df5e6       coredns-7db6d8ff4d-lxd6n
	ccf4b3ead47f7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   61e1e7d7b51fb       busybox-fc5497c4f-bdzsx
	5eb15a71cbeec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   43f0b5e0d015c       coredns-7db6d8ff4d-lxd6n
	6f896bc7211fd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   5dcd51ad312e1       coredns-7db6d8ff4d-bqzvn
	b028950fdf37b       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      14 minutes ago       Exited              kube-proxy                0                   4e233e0bc3bb7       kube-proxy-69ctp
	6255c7db8bcf2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   b0297d465b251       etcd-ha-844626
	223d45eb38f84       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      14 minutes ago       Exited              kube-scheduler            0                   5512a35ec1cf1       kube-scheduler-ha-844626
	
	
	==> coredns [177da54ccde0b134f353821e30d94d485a45f9d5c67619d03d4ff3935aed495d] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1824826505]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Jun-2024 20:40:21.024) (total time: 10001ms):
	Trace[1824826505]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:40:31.026)
	Trace[1824826505]: [10.001539554s] [10.001539554s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:51602->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:51602->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [209b6c2b28de4f9be36a8b96a42fd0658f8741138b54758c0a4036332c38a03b] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1170774030]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Jun-2024 20:40:19.764) (total time: 10001ms):
	Trace[1170774030]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:40:29.765)
	Trace[1170774030]: [10.001659789s] [10.001659789s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1540583505]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Jun-2024 20:40:23.841) (total time: 10001ms):
	Trace[1540583505]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:40:33.842)
	Trace[1540583505]: [10.0015879s] [10.0015879s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57402->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57402->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57410->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57410->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0] <==
	[INFO] 10.244.2.2:46088 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001813687s
	[INFO] 10.244.2.2:41288 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099916s
	[INFO] 10.244.2.2:50111 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001353864s
	[INFO] 10.244.2.2:58718 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071988s
	[INFO] 10.244.2.2:53104 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063402s
	[INFO] 10.244.2.2:33504 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000200272s
	[INFO] 10.244.0.4:57974 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068404s
	[INFO] 10.244.1.2:36180 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000396478s
	[INFO] 10.244.1.2:44974 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143897s
	[INFO] 10.244.2.2:45916 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153283s
	[INFO] 10.244.2.2:54255 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107674s
	[INFO] 10.244.2.2:37490 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120001s
	[INFO] 10.244.2.2:35084 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008018s
	[INFO] 10.244.0.4:39477 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000273278s
	[INFO] 10.244.1.2:48205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158614s
	[INFO] 10.244.1.2:59881 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158202s
	[INFO] 10.244.1.2:35567 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000472197s
	[INFO] 10.244.1.2:56490 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000211826s
	[INFO] 10.244.2.2:48246 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156952s
	[INFO] 10.244.2.2:43466 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117313s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=2004&timeout=5m58s&timeoutSeconds=358&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=2002&timeout=9m58s&timeoutSeconds=598&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=2004&timeout=8m46s&timeoutSeconds=526&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff] <==
	[INFO] 10.244.0.4:56242 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009694s
	[INFO] 10.244.0.4:50224 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170892s
	[INFO] 10.244.0.4:50347 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139284s
	[INFO] 10.244.0.4:43967 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.022155051s
	[INFO] 10.244.0.4:34878 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000206851s
	[INFO] 10.244.1.2:46797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00034142s
	[INFO] 10.244.1.2:43369 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000248825s
	[INFO] 10.244.1.2:56650 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001632154s
	[INFO] 10.244.2.2:38141 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172487s
	[INFO] 10.244.2.2:60906 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158767s
	[INFO] 10.244.0.4:40480 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117274s
	[INFO] 10.244.0.4:47149 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000771s
	[INFO] 10.244.0.4:56834 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000323893s
	[INFO] 10.244.1.2:44664 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146272s
	[INFO] 10.244.1.2:47748 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110683s
	[INFO] 10.244.0.4:39510 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159779s
	[INFO] 10.244.0.4:49210 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125351s
	[INFO] 10.244.0.4:48326 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000179032s
	[INFO] 10.244.2.2:38296 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150584s
	[INFO] 10.244.2.2:58162 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116767s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-844626
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844626
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=ha-844626
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T20_28_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:28:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844626
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:43:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:41:00 +0000   Wed, 12 Jun 2024 20:28:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:41:00 +0000   Wed, 12 Jun 2024 20:28:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:41:00 +0000   Wed, 12 Jun 2024 20:28:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:41:00 +0000   Wed, 12 Jun 2024 20:28:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    ha-844626
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca8d79507bbc4f44bf947af92833058f
	  System UUID:                ca8d7950-7bbc-4f44-bf94-7af92833058f
	  Boot ID:                    da0f0a2a-5126-4bca-9f1f-744b30254ff4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bdzsx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-bqzvn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-lxd6n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-844626                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-mthnq                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-844626             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-844626    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-69ctp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-844626             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-844626                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m11s              kube-proxy       
	  Normal   Starting                 14m                kube-proxy       
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-844626 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-844626 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-844626 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     14m                kubelet          Node ha-844626 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                kubelet          Node ha-844626 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                kubelet          Node ha-844626 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	  Normal   NodeReady                14m                kubelet          Node ha-844626 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	  Warning  ContainerGCFailed        3m53s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m2s               node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	  Normal   RegisteredNode           119s               node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	  Normal   RegisteredNode           31s                node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	
	
	Name:               ha-844626-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844626-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=ha-844626
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T20_30_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:30:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844626-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:43:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:41:43 +0000   Wed, 12 Jun 2024 20:41:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:41:43 +0000   Wed, 12 Jun 2024 20:41:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:41:43 +0000   Wed, 12 Jun 2024 20:41:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:41:43 +0000   Wed, 12 Jun 2024 20:41:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.108
	  Hostname:    ha-844626-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc34ec9a17c449479c11e07f628f1a6e
	  System UUID:                fc34ec9a-17c4-4947-9c11-e07f628f1a6e
	  Boot ID:                    46eea217-77e1-490e-ade1-0905b3fafd17
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bh59q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-844626-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-fz6bl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-844626-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-844626-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-f7ct8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-844626-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-844626-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 103s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-844626-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-844626-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-844626-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  NodeNotReady             9m30s                  node-controller  Node ha-844626-m02 status is now: NodeNotReady
	  Normal  Starting                 2m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node ha-844626-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m38s (x8 over 2m38s)  kubelet          Node ha-844626-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m38s (x7 over 2m38s)  kubelet          Node ha-844626-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m2s                   node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  RegisteredNode           119s                   node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  RegisteredNode           31s                    node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	
	
	Name:               ha-844626-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844626-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=ha-844626
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T20_31_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:31:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844626-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:43:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:42:43 +0000   Wed, 12 Jun 2024 20:42:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:42:43 +0000   Wed, 12 Jun 2024 20:42:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:42:43 +0000   Wed, 12 Jun 2024 20:42:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:42:43 +0000   Wed, 12 Jun 2024 20:42:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    ha-844626-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e6bf394d9ac40219e8a5de4a5d52b0f
	  System UUID:                1e6bf394-d9ac-4021-9e8a-5de4a5d52b0f
	  Boot ID:                    9e679953-1347-41a8-acbd-dfe10f70a978
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dhw8h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-844626-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-8hdxz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-844626-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-844626-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-2clg8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-844626-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-844626-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 42s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-844626-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-844626-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-844626-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-844626-m03 event: Registered Node ha-844626-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-844626-m03 event: Registered Node ha-844626-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-844626-m03 event: Registered Node ha-844626-m03 in Controller
	  Normal   RegisteredNode           2m2s               node-controller  Node ha-844626-m03 event: Registered Node ha-844626-m03 in Controller
	  Normal   RegisteredNode           119s               node-controller  Node ha-844626-m03 event: Registered Node ha-844626-m03 in Controller
	  Normal   NodeNotReady             82s                node-controller  Node ha-844626-m03 status is now: NodeNotReady
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  60s (x2 over 60s)  kubelet          Node ha-844626-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x2 over 60s)  kubelet          Node ha-844626-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x2 over 60s)  kubelet          Node ha-844626-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 60s                kubelet          Node ha-844626-m03 has been rebooted, boot id: 9e679953-1347-41a8-acbd-dfe10f70a978
	  Normal   NodeReady                60s                kubelet          Node ha-844626-m03 status is now: NodeReady
	  Normal   RegisteredNode           31s                node-controller  Node ha-844626-m03 event: Registered Node ha-844626-m03 in Controller
	
	
	Name:               ha-844626-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844626-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=ha-844626
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T20_32_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:32:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844626-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:43:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:43:04 +0000   Wed, 12 Jun 2024 20:43:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:43:04 +0000   Wed, 12 Jun 2024 20:43:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:43:04 +0000   Wed, 12 Jun 2024 20:43:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:43:04 +0000   Wed, 12 Jun 2024 20:43:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    ha-844626-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 76e9ad048f36466a8cb780349dbd0fce
	  System UUID:                76e9ad04-8f36-466a-8cb7-80349dbd0fce
	  Boot ID:                    5ccfdbf7-4568-4904-ac71-2a48c42eb716
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pwr4p       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-dbk2r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-844626-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-844626-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-844626-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-844626-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m2s               node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal   RegisteredNode           119s               node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal   NodeNotReady             82s                node-controller  Node ha-844626-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           31s                node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-844626-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-844626-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-844626-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-844626-m04 has been rebooted, boot id: 5ccfdbf7-4568-4904-ac71-2a48c42eb716
	  Normal   NodeReady                8s                 kubelet          Node ha-844626-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.063983] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073055] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.159207] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.152158] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.286482] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.221083] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.069110] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.063782] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.293152] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.089558] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.977157] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.420198] kauditd_printk_skb: 38 callbacks suppressed
	[Jun12 20:30] kauditd_printk_skb: 26 callbacks suppressed
	[Jun12 20:40] systemd-fstab-generator[3745]: Ignoring "noauto" option for root device
	[  +0.151873] systemd-fstab-generator[3757]: Ignoring "noauto" option for root device
	[  +0.179450] systemd-fstab-generator[3771]: Ignoring "noauto" option for root device
	[  +0.147532] systemd-fstab-generator[3783]: Ignoring "noauto" option for root device
	[  +0.285076] systemd-fstab-generator[3811]: Ignoring "noauto" option for root device
	[  +3.505544] systemd-fstab-generator[3914]: Ignoring "noauto" option for root device
	[  +1.298931] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.066926] kauditd_printk_skb: 73 callbacks suppressed
	[ +14.874362] kauditd_printk_skb: 15 callbacks suppressed
	[ +23.997592] kauditd_printk_skb: 5 callbacks suppressed
	[Jun12 20:41] kauditd_printk_skb: 3 callbacks suppressed
	[ +30.195544] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [4350e829162646be417b019b2cb971ff3a4548b2e65be4e5d7cc77a69a513de1] <==
	{"level":"warn","ts":"2024-06-12T20:42:07.554048Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T20:42:07.645574Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.76:2380/version","remote-member-id":"d724031a215d8a63","error":"Get \"https://192.168.39.76:2380/version\": dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-12T20:42:07.645662Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d724031a215d8a63","error":"Get \"https://192.168.39.76:2380/version\": dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-12T20:42:08.38686Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d724031a215d8a63","rtt":"0s","error":"dial tcp 192.168.39.76:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-06-12T20:42:08.387043Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d724031a215d8a63","rtt":"0s","error":"dial tcp 192.168.39.76:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-06-12T20:42:11.647852Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.76:2380/version","remote-member-id":"d724031a215d8a63","error":"Get \"https://192.168.39.76:2380/version\": dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-12T20:42:11.647931Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d724031a215d8a63","error":"Get \"https://192.168.39.76:2380/version\": dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-12T20:42:13.387164Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d724031a215d8a63","rtt":"0s","error":"dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-12T20:42:13.38727Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d724031a215d8a63","rtt":"0s","error":"dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-12T20:42:15.650735Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.76:2380/version","remote-member-id":"d724031a215d8a63","error":"Get \"https://192.168.39.76:2380/version\": dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-12T20:42:15.650817Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d724031a215d8a63","error":"Get \"https://192.168.39.76:2380/version\": dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-12T20:42:18.387514Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d724031a215d8a63","rtt":"0s","error":"dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-12T20:42:18.387631Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d724031a215d8a63","rtt":"0s","error":"dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-12T20:42:19.654053Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.76:2380/version","remote-member-id":"d724031a215d8a63","error":"Get \"https://192.168.39.76:2380/version\": dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-12T20:42:19.654123Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"d724031a215d8a63","error":"Get \"https://192.168.39.76:2380/version\": dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"info","ts":"2024-06-12T20:42:22.807698Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:42:22.808879Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:42:22.809862Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:42:22.824165Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a14f9258d3b66c75","to":"d724031a215d8a63","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-06-12T20:42:22.824364Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:42:22.833375Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a14f9258d3b66c75","to":"d724031a215d8a63","stream-type":"stream Message"}
	{"level":"info","ts":"2024-06-12T20:42:22.833515Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"warn","ts":"2024-06-12T20:42:23.388567Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d724031a215d8a63","rtt":"0s","error":"dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-12T20:42:23.388707Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d724031a215d8a63","rtt":"0s","error":"dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"info","ts":"2024-06-12T20:42:36.488745Z","caller":"traceutil/trace.go:171","msg":"trace[1345592607] transaction","detail":"{read_only:false; response_revision:2638; number_of_response:1; }","duration":"155.189584ms","start":"2024-06-12T20:42:36.333517Z","end":"2024-06-12T20:42:36.488707Z","steps":["trace[1345592607] 'process raft request'  (duration: 155.021037ms)"],"step_count":1}
	
	
	==> etcd [6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d] <==
	{"level":"info","ts":"2024-06-12T20:38:35.687945Z","caller":"traceutil/trace.go:171","msg":"trace[892167329] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; }","duration":"858.920602ms","start":"2024-06-12T20:38:34.829017Z","end":"2024-06-12T20:38:35.687937Z","steps":["trace[892167329] 'agreement among raft nodes before linearized reading'  (duration: 853.724812ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:38:35.687958Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T20:38:34.829013Z","time spent":"858.940572ms","remote":"127.0.0.1:40880","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 "}
	2024/06/12 20:38:35 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-06-12T20:38:35.687553Z","caller":"traceutil/trace.go:171","msg":"trace[1513091851] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; }","duration":"863.128292ms","start":"2024-06-12T20:38:34.824414Z","end":"2024-06-12T20:38:35.687543Z","steps":["trace[1513091851] 'agreement among raft nodes before linearized reading'  (duration: 858.150993ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:38:35.691771Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T20:38:34.824411Z","time spent":"867.164025ms","remote":"127.0.0.1:40990","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:10000 "}
	2024/06/12 20:38:35 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-12T20:38:35.806633Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":7815311118762690248,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-06-12T20:38:35.809183Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a14f9258d3b66c75","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-06-12T20:38:35.809447Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d248ce75fc8bdbf7"}
	{"level":"info","ts":"2024-06-12T20:38:35.809463Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d248ce75fc8bdbf7"}
	{"level":"info","ts":"2024-06-12T20:38:35.809487Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d248ce75fc8bdbf7"}
	{"level":"info","ts":"2024-06-12T20:38:35.809658Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7"}
	{"level":"info","ts":"2024-06-12T20:38:35.809712Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7"}
	{"level":"info","ts":"2024-06-12T20:38:35.809776Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7"}
	{"level":"info","ts":"2024-06-12T20:38:35.80981Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d248ce75fc8bdbf7"}
	{"level":"info","ts":"2024-06-12T20:38:35.809819Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:38:35.809832Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:38:35.80985Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:38:35.809919Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:38:35.809965Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:38:35.810015Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:38:35.810027Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:38:35.813128Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-06-12T20:38:35.813451Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-06-12T20:38:35.813492Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-844626","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.196:2380"],"advertise-client-urls":["https://192.168.39.196:2379"]}
	
	
	==> kernel <==
	 20:43:12 up 15 min,  0 users,  load average: 1.69, 1.29, 0.70
	Linux ha-844626 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ecf561296e89021cfeb3942a411fdb1a39d363d089d6c0e3abc9f21a0ed0a02b] <==
	I0612 20:40:13.415752       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0612 20:40:13.415920       1 main.go:107] hostIP = 192.168.39.196
	podIP = 192.168.39.196
	I0612 20:40:13.416128       1 main.go:116] setting mtu 1500 for CNI 
	I0612 20:40:13.416177       1 main.go:146] kindnetd IP family: "ipv4"
	I0612 20:40:13.416277       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0612 20:40:13.718903       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0612 20:40:13.719486       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0612 20:40:19.104394       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0612 20:40:22.172348       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0612 20:40:35.174055       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
	
	
	==> kindnet [f432160e35b26c7b012ec4edfd7d00508fb15c4cc8f9547df1507fa19a6dabee] <==
	I0612 20:42:40.935766       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	I0612 20:42:50.947674       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0612 20:42:50.947711       1 main.go:227] handling current node
	I0612 20:42:50.947722       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0612 20:42:50.947727       1 main.go:250] Node ha-844626-m02 has CIDR [10.244.1.0/24] 
	I0612 20:42:50.947895       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0612 20:42:50.947919       1 main.go:250] Node ha-844626-m03 has CIDR [10.244.2.0/24] 
	I0612 20:42:50.948030       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0612 20:42:50.948059       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	I0612 20:43:00.963807       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0612 20:43:00.964029       1 main.go:227] handling current node
	I0612 20:43:00.964080       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0612 20:43:00.964112       1 main.go:250] Node ha-844626-m02 has CIDR [10.244.1.0/24] 
	I0612 20:43:00.964349       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0612 20:43:00.964392       1 main.go:250] Node ha-844626-m03 has CIDR [10.244.2.0/24] 
	I0612 20:43:00.964465       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0612 20:43:00.964483       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	I0612 20:43:10.981929       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0612 20:43:10.981958       1 main.go:227] handling current node
	I0612 20:43:10.981972       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0612 20:43:10.981978       1 main.go:250] Node ha-844626-m02 has CIDR [10.244.1.0/24] 
	I0612 20:43:11.014648       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0612 20:43:11.014869       1 main.go:250] Node ha-844626-m03 has CIDR [10.244.2.0/24] 
	I0612 20:43:11.015000       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0612 20:43:11.015024       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [64c4a2d567239f2cf47396cba150c895012356b8ff9c055eafd3490a6316c791] <==
	I0612 20:40:58.647073       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0612 20:40:58.647616       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 20:40:58.647727       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 20:40:58.734804       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0612 20:40:58.734838       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0612 20:40:58.743881       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0612 20:40:58.744703       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0612 20:40:58.745376       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0612 20:40:58.762702       1 shared_informer.go:320] Caches are synced for configmaps
	I0612 20:40:58.763539       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0612 20:40:58.763848       1 aggregator.go:165] initial CRD sync complete...
	I0612 20:40:58.764036       1 autoregister_controller.go:141] Starting autoregister controller
	I0612 20:40:58.764150       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0612 20:40:58.764248       1 cache.go:39] Caches are synced for autoregister controller
	I0612 20:40:58.767308       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0612 20:40:58.776570       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.76]
	I0612 20:40:58.787779       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0612 20:40:58.790150       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 20:40:58.790306       1 policy_source.go:224] refreshing policies
	I0612 20:40:58.863062       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0612 20:40:58.880004       1 controller.go:615] quota admission added evaluator for: endpoints
	I0612 20:40:58.892022       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0612 20:40:58.896507       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0612 20:40:59.650063       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0612 20:41:00.017192       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.108 192.168.39.196 192.168.39.76]
	
	
	==> kube-apiserver [c99ff2541bb480d1e29fe0cdcb21ac962bbb63edc50c303d905d5df9c801bb3f] <==
	I0612 20:40:18.011157       1 options.go:221] external host was not specified, using 192.168.39.196
	I0612 20:40:18.012170       1 server.go:148] Version: v1.30.1
	I0612 20:40:18.012264       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:40:19.040471       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0612 20:40:19.041694       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0612 20:40:19.041727       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0612 20:40:19.041876       1 instance.go:299] Using reconciler: lease
	I0612 20:40:19.042293       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0612 20:40:39.037547       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0612 20:40:39.037547       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0612 20:40:39.042591       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [27fe7677b75490eb9a887b3192e914a38bbc5fd772111c9a731fd0c67b961eea] <==
	I0612 20:41:13.200771       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 20:41:13.343605       1 shared_informer.go:320] Caches are synced for endpoint
	I0612 20:41:13.344334       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 20:41:13.358256       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 20:41:13.367081       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0612 20:41:13.375424       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0612 20:41:13.759567       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 20:41:13.759605       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0612 20:41:13.800184       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 20:41:19.027672       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-6xftl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-6xftl\": the object has been modified; please apply your changes to the latest version and try again"
	I0612 20:41:19.028287       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"ec4c1802-2189-49f6-b0eb-79e751e72b6c", APIVersion:"v1", ResourceVersion:"281", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-6xftl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-6xftl": the object has been modified; please apply your changes to the latest version and try again
	I0612 20:41:19.032791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.102713ms"
	I0612 20:41:19.033098       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="126.411µs"
	I0612 20:41:29.478705       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.811938ms"
	I0612 20:41:29.479058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.549µs"
	I0612 20:41:39.077579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.161117ms"
	I0612 20:41:39.077975       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="99.03µs"
	I0612 20:41:39.088899       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-6xftl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-6xftl\": the object has been modified; please apply your changes to the latest version and try again"
	I0612 20:41:39.089365       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"ec4c1802-2189-49f6-b0eb-79e751e72b6c", APIVersion:"v1", ResourceVersion:"281", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-6xftl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-6xftl": the object has been modified; please apply your changes to the latest version and try again
	I0612 20:41:50.457378       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.235972ms"
	I0612 20:41:50.457764       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="275.194µs"
	I0612 20:42:13.790777       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.552µs"
	I0612 20:42:33.256738       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.828858ms"
	I0612 20:42:33.256845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.405µs"
	I0612 20:43:04.245586       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-844626-m04"
	
	
	==> kube-controller-manager [d98b9e0f5051ab363ca02821c8f8d231f5298a04d44f3f40a1ac8a145a70e570] <==
	I0612 20:40:19.075323       1 serving.go:380] Generated self-signed cert in-memory
	I0612 20:40:19.507560       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0612 20:40:19.507672       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:40:19.509538       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 20:40:19.509673       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 20:40:19.510258       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 20:40:19.510357       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0612 20:40:40.050959       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.196:8443/healthz\": dial tcp 192.168.39.196:8443: connect: connection refused"
	
	
	==> kube-proxy [847d6ff92e8e601118971db1953ddd8cd8fd05b8a16cb89aef9e6bf5c67a8426] <==
	I0612 20:40:19.621512       1 server_linux.go:69] "Using iptables proxy"
	E0612 20:40:20.252414       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-844626\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0612 20:40:23.324158       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-844626\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0612 20:40:26.396673       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-844626\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0612 20:40:32.541086       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-844626\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0612 20:40:41.756043       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-844626\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0612 20:41:00.940649       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.196"]
	I0612 20:41:01.072731       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 20:41:01.074736       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 20:41:01.074972       1 server_linux.go:165] "Using iptables Proxier"
	I0612 20:41:01.128326       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 20:41:01.128928       1 server.go:872] "Version info" version="v1.30.1"
	I0612 20:41:01.130334       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:41:01.153523       1 config.go:192] "Starting service config controller"
	I0612 20:41:01.153631       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 20:41:01.153750       1 config.go:101] "Starting endpoint slice config controller"
	I0612 20:41:01.153853       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 20:41:01.164088       1 config.go:319] "Starting node config controller"
	I0612 20:41:01.164148       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 20:41:01.254012       1 shared_informer.go:320] Caches are synced for service config
	I0612 20:41:01.254149       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 20:41:01.264498       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c] <==
	E0612 20:37:25.149289       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:28.221435       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:28.221546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:28.221934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:28.222026       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:28.222290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:28.222376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:34.364149       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:34.364340       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:34.364563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:34.364737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:34.364990       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:34.365157       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:43.580082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:43.580256       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:46.652328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:46.652392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:46.652915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:46.652984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:38:02.012602       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:38:02.012767       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:38:11.228580       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:38:11.228673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:38:11.228767       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:38:11.228813       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92] <==
	W0612 20:38:33.275679       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0612 20:38:33.275887       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0612 20:38:33.493638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0612 20:38:33.493814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0612 20:38:33.525359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0612 20:38:33.525457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0612 20:38:33.712127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0612 20:38:33.712336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0612 20:38:33.797538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0612 20:38:33.797617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0612 20:38:34.221988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0612 20:38:34.222098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0612 20:38:34.273726       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0612 20:38:34.273777       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0612 20:38:34.715597       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0612 20:38:34.715650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0612 20:38:34.919263       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0612 20:38:34.919311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0612 20:38:35.315461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0612 20:38:35.315559       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0612 20:38:35.430126       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 20:38:35.430155       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 20:38:35.642096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0612 20:38:35.642125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0612 20:38:35.658901       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [82299b1981f41471feb0a36cd022834e98c7a620a668655d739be255454304da] <==
	W0612 20:40:48.866183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.196:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:48.866401       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.196:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:48.975835       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.196:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:48.975949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.196:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:49.066597       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:49.066708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:49.285735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.196:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:49.285795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.196:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:49.466421       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.196:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:49.466473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.196:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:49.711304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.196:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:49.711360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.196:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:50.141481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:50.141593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:56.548331       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:56.548402       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:56.915997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:56.916072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:58.657555       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0612 20:40:58.657643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0612 20:40:58.657895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0612 20:40:58.657945       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0612 20:40:58.657991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0612 20:40:58.657999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 20:40:59.363006       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 20:41:00 ha-844626 kubelet[1371]: I0612 20:41:00.187706    1371 status_manager.go:853] "Failed to get status for pod" podUID="65d25d78-6fa7-4dc7-9cf2-e2fac796f194" pod="kube-system/coredns-7db6d8ff4d-lxd6n" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lxd6n\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jun 12 20:41:00 ha-844626 kubelet[1371]: E0612 20:41:00.188305    1371 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-844626\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-844626?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jun 12 20:41:00 ha-844626 kubelet[1371]: I0612 20:41:00.789999    1371 scope.go:117] "RemoveContainer" containerID="d98b9e0f5051ab363ca02821c8f8d231f5298a04d44f3f40a1ac8a145a70e570"
	Jun 12 20:41:05 ha-844626 kubelet[1371]: I0612 20:41:05.789911    1371 scope.go:117] "RemoveContainer" containerID="ecf561296e89021cfeb3942a411fdb1a39d363d089d6c0e3abc9f21a0ed0a02b"
	Jun 12 20:41:05 ha-844626 kubelet[1371]: E0612 20:41:05.790169    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-mthnq_kube-system(49950bb0-368d-4239-ae93-04c980a8b531)\"" pod="kube-system/kindnet-mthnq" podUID="49950bb0-368d-4239-ae93-04c980a8b531"
	Jun 12 20:41:11 ha-844626 kubelet[1371]: I0612 20:41:11.789928    1371 scope.go:117] "RemoveContainer" containerID="6cd3375cfa65cdf6427956610a22c5ad458ab15dcb4c60281d661e3b46f921ce"
	Jun 12 20:41:11 ha-844626 kubelet[1371]: E0612 20:41:11.790116    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d94c16d7-da82-41e3-82fe-83ed6e581f69)\"" pod="kube-system/storage-provisioner" podUID="d94c16d7-da82-41e3-82fe-83ed6e581f69"
	Jun 12 20:41:12 ha-844626 kubelet[1371]: I0612 20:41:12.374925    1371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-bdzsx" podStartSLOduration=552.848611478 podStartE2EDuration="9m15.374893891s" podCreationTimestamp="2024-06-12 20:31:57 +0000 UTC" firstStartedPulling="2024-06-12 20:31:58.604799429 +0000 UTC m=+218.952002362" lastFinishedPulling="2024-06-12 20:32:01.131081833 +0000 UTC m=+221.478284775" observedRunningTime="2024-06-12 20:32:01.757291875 +0000 UTC m=+222.104494830" watchObservedRunningTime="2024-06-12 20:41:12.374893891 +0000 UTC m=+772.722096844"
	Jun 12 20:41:19 ha-844626 kubelet[1371]: I0612 20:41:19.790769    1371 scope.go:117] "RemoveContainer" containerID="ecf561296e89021cfeb3942a411fdb1a39d363d089d6c0e3abc9f21a0ed0a02b"
	Jun 12 20:41:19 ha-844626 kubelet[1371]: E0612 20:41:19.812296    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:41:19 ha-844626 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:41:19 ha-844626 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:41:19 ha-844626 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:41:19 ha-844626 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:41:25 ha-844626 kubelet[1371]: I0612 20:41:25.789837    1371 scope.go:117] "RemoveContainer" containerID="6cd3375cfa65cdf6427956610a22c5ad458ab15dcb4c60281d661e3b46f921ce"
	Jun 12 20:41:25 ha-844626 kubelet[1371]: E0612 20:41:25.790692    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d94c16d7-da82-41e3-82fe-83ed6e581f69)\"" pod="kube-system/storage-provisioner" podUID="d94c16d7-da82-41e3-82fe-83ed6e581f69"
	Jun 12 20:41:39 ha-844626 kubelet[1371]: I0612 20:41:39.790556    1371 scope.go:117] "RemoveContainer" containerID="6cd3375cfa65cdf6427956610a22c5ad458ab15dcb4c60281d661e3b46f921ce"
	Jun 12 20:42:07 ha-844626 kubelet[1371]: I0612 20:42:07.790440    1371 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-844626" podUID="654fd183-21b0-4df5-b557-ed676c5ecb71"
	Jun 12 20:42:07 ha-844626 kubelet[1371]: I0612 20:42:07.812508    1371 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-844626"
	Jun 12 20:42:08 ha-844626 kubelet[1371]: I0612 20:42:08.005452    1371 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-844626" podUID="654fd183-21b0-4df5-b557-ed676c5ecb71"
	Jun 12 20:42:19 ha-844626 kubelet[1371]: E0612 20:42:19.810138    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:42:19 ha-844626 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:42:19 ha-844626 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:42:19 ha-844626 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:42:19 ha-844626 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 20:43:11.230934   40637 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17779-14199/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
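(Editor's aside on the stderr above: "bufio.Scanner: token too long" is Go's bufio.ErrTooLong, returned when a single line exceeds the scanner's default 64 KiB token limit, here while reading lastStart.txt. Below is a minimal, hypothetical sketch of reading such a file with an enlarged scanner buffer; it is not minikube's actual logs.go code, and the file name and sizes are illustrative only.)

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path, for illustration only
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default max token size is 64 KiB; one very long line in the log
		// file exceeds it and yields bufio.ErrTooLong ("token too long").
		// Giving the scanner a larger maximum (1 MiB here) avoids that.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}
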
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-844626 -n ha-844626
helpers_test.go:261: (dbg) Run:  kubectl --context ha-844626 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (400.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 stop -v=7 --alsologtostderr
E0612 20:44:56.704626   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-844626 stop -v=7 --alsologtostderr: exit status 82 (2m0.48175453s)

                                                
                                                
-- stdout --
	* Stopping node "ha-844626-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 20:43:32.226041   41053 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:43:32.226309   41053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:43:32.226320   41053 out.go:304] Setting ErrFile to fd 2...
	I0612 20:43:32.226324   41053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:43:32.226501   41053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:43:32.226730   41053 out.go:298] Setting JSON to false
	I0612 20:43:32.226798   41053 mustload.go:65] Loading cluster: ha-844626
	I0612 20:43:32.227155   41053 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:43:32.227269   41053 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:43:32.227446   41053 mustload.go:65] Loading cluster: ha-844626
	I0612 20:43:32.227572   41053 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:43:32.227600   41053 stop.go:39] StopHost: ha-844626-m04
	I0612 20:43:32.228048   41053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:43:32.228105   41053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:43:32.243479   41053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46745
	I0612 20:43:32.243949   41053 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:43:32.244460   41053 main.go:141] libmachine: Using API Version  1
	I0612 20:43:32.244490   41053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:43:32.244800   41053 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:43:32.247232   41053 out.go:177] * Stopping node "ha-844626-m04"  ...
	I0612 20:43:32.248663   41053 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0612 20:43:32.248693   41053 main.go:141] libmachine: (ha-844626-m04) Calling .DriverName
	I0612 20:43:32.248955   41053 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0612 20:43:32.248985   41053 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHHostname
	I0612 20:43:32.252299   41053 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:43:32.252736   41053 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:42:59 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:43:32.252778   41053 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:43:32.253031   41053 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHPort
	I0612 20:43:32.253191   41053 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHKeyPath
	I0612 20:43:32.253334   41053 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHUsername
	I0612 20:43:32.253445   41053 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m04/id_rsa Username:docker}
	I0612 20:43:32.338375   41053 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0612 20:43:32.391989   41053 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0612 20:43:32.446031   41053 main.go:141] libmachine: Stopping "ha-844626-m04"...
	I0612 20:43:32.446056   41053 main.go:141] libmachine: (ha-844626-m04) Calling .GetState
	I0612 20:43:32.447724   41053 main.go:141] libmachine: (ha-844626-m04) Calling .Stop
	I0612 20:43:32.451996   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 0/120
	I0612 20:43:33.453461   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 1/120
	I0612 20:43:34.454861   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 2/120
	I0612 20:43:35.457187   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 3/120
	I0612 20:43:36.458712   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 4/120
	I0612 20:43:37.460778   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 5/120
	I0612 20:43:38.462299   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 6/120
	I0612 20:43:39.463884   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 7/120
	I0612 20:43:40.466341   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 8/120
	I0612 20:43:41.467881   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 9/120
	I0612 20:43:42.470131   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 10/120
	I0612 20:43:43.471431   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 11/120
	I0612 20:43:44.473855   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 12/120
	I0612 20:43:45.475527   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 13/120
	I0612 20:43:46.477662   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 14/120
	I0612 20:43:47.478999   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 15/120
	I0612 20:43:48.480323   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 16/120
	I0612 20:43:49.481730   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 17/120
	I0612 20:43:50.484015   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 18/120
	I0612 20:43:51.485880   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 19/120
	I0612 20:43:52.488323   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 20/120
	I0612 20:43:53.489911   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 21/120
	I0612 20:43:54.491542   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 22/120
	I0612 20:43:55.493943   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 23/120
	I0612 20:43:56.495518   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 24/120
	I0612 20:43:57.497734   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 25/120
	I0612 20:43:58.498962   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 26/120
	I0612 20:43:59.500582   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 27/120
	I0612 20:44:00.502727   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 28/120
	I0612 20:44:01.503909   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 29/120
	I0612 20:44:02.506210   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 30/120
	I0612 20:44:03.507681   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 31/120
	I0612 20:44:04.509649   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 32/120
	I0612 20:44:05.510934   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 33/120
	I0612 20:44:06.512247   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 34/120
	I0612 20:44:07.514326   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 35/120
	I0612 20:44:08.515789   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 36/120
	I0612 20:44:09.517268   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 37/120
	I0612 20:44:10.518852   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 38/120
	I0612 20:44:11.520308   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 39/120
	I0612 20:44:12.522412   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 40/120
	I0612 20:44:13.523708   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 41/120
	I0612 20:44:14.525824   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 42/120
	I0612 20:44:15.526993   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 43/120
	I0612 20:44:16.528519   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 44/120
	I0612 20:44:17.530663   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 45/120
	I0612 20:44:18.532061   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 46/120
	I0612 20:44:19.534045   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 47/120
	I0612 20:44:20.535758   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 48/120
	I0612 20:44:21.537182   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 49/120
	I0612 20:44:22.539496   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 50/120
	I0612 20:44:23.540795   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 51/120
	I0612 20:44:24.542274   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 52/120
	I0612 20:44:25.543686   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 53/120
	I0612 20:44:26.545594   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 54/120
	I0612 20:44:27.547718   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 55/120
	I0612 20:44:28.549790   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 56/120
	I0612 20:44:29.551254   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 57/120
	I0612 20:44:30.552588   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 58/120
	I0612 20:44:31.554216   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 59/120
	I0612 20:44:32.556369   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 60/120
	I0612 20:44:33.557704   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 61/120
	I0612 20:44:34.559056   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 62/120
	I0612 20:44:35.561057   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 63/120
	I0612 20:44:36.562282   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 64/120
	I0612 20:44:37.564350   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 65/120
	I0612 20:44:38.566499   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 66/120
	I0612 20:44:39.567882   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 67/120
	I0612 20:44:40.569275   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 68/120
	I0612 20:44:41.570724   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 69/120
	I0612 20:44:42.573061   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 70/120
	I0612 20:44:43.574573   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 71/120
	I0612 20:44:44.575827   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 72/120
	I0612 20:44:45.577098   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 73/120
	I0612 20:44:46.578588   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 74/120
	I0612 20:44:47.580601   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 75/120
	I0612 20:44:48.582168   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 76/120
	I0612 20:44:49.583754   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 77/120
	I0612 20:44:50.585262   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 78/120
	I0612 20:44:51.586744   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 79/120
	I0612 20:44:52.588909   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 80/120
	I0612 20:44:53.590118   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 81/120
	I0612 20:44:54.591529   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 82/120
	I0612 20:44:55.593436   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 83/120
	I0612 20:44:56.595194   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 84/120
	I0612 20:44:57.597347   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 85/120
	I0612 20:44:58.598841   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 86/120
	I0612 20:44:59.600514   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 87/120
	I0612 20:45:00.601981   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 88/120
	I0612 20:45:01.603402   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 89/120
	I0612 20:45:02.605750   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 90/120
	I0612 20:45:03.607202   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 91/120
	I0612 20:45:04.608525   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 92/120
	I0612 20:45:05.610190   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 93/120
	I0612 20:45:06.611728   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 94/120
	I0612 20:45:07.613616   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 95/120
	I0612 20:45:08.615368   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 96/120
	I0612 20:45:09.616712   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 97/120
	I0612 20:45:10.618226   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 98/120
	I0612 20:45:11.619924   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 99/120
	I0612 20:45:12.622116   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 100/120
	I0612 20:45:13.623528   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 101/120
	I0612 20:45:14.625766   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 102/120
	I0612 20:45:15.627276   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 103/120
	I0612 20:45:16.628846   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 104/120
	I0612 20:45:17.630799   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 105/120
	I0612 20:45:18.632228   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 106/120
	I0612 20:45:19.633655   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 107/120
	I0612 20:45:20.634845   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 108/120
	I0612 20:45:21.636966   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 109/120
	I0612 20:45:22.638830   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 110/120
	I0612 20:45:23.640404   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 111/120
	I0612 20:45:24.641774   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 112/120
	I0612 20:45:25.643740   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 113/120
	I0612 20:45:26.645692   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 114/120
	I0612 20:45:27.647649   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 115/120
	I0612 20:45:28.649687   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 116/120
	I0612 20:45:29.651206   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 117/120
	I0612 20:45:30.652621   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 118/120
	I0612 20:45:31.654118   41053 main.go:141] libmachine: (ha-844626-m04) Waiting for machine to stop 119/120
	I0612 20:45:32.655281   41053 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0612 20:45:32.655334   41053 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0612 20:45:32.657368   41053 out.go:177] 
	W0612 20:45:32.658817   41053 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0612 20:45:32.658841   41053 out.go:239] * 
	* 
	W0612 20:45:32.661162   41053 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0612 20:45:32.662397   41053 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-844626 stop -v=7 --alsologtostderr": exit status 82
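The "Waiting for machine to stop N/120" lines above show libmachine polling the VM state roughly once per second and giving up after 120 attempts, which is what surfaces as GUEST_STOP_TIMEOUT. A minimal Go sketch of that poll-until-stopped pattern follows; the isRunning helper is hypothetical and stands in for the driver's state query, and this is an illustrative sketch, not minikube's actual implementation.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// isRunning is a hypothetical probe standing in for the driver's
	// GetState call; it reports whether the VM still appears to be running.
	func isRunning() bool {
		// ... query the hypervisor here ...
		return true
	}

	// waitForStop polls once per second for up to maxAttempts, mirroring the
	// "Waiting for machine to stop N/120" lines in the log above.
	func waitForStop(maxAttempts int) error {
		for i := 0; i < maxAttempts; i++ {
			if !isRunning() {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(1 * time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop(120); err != nil {
			// A caller would surface this as a stop timeout (exit status 82 here).
			fmt.Println("stop err:", err)
		}
	}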
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr: exit status 3 (18.999034527s)

                                                
                                                
-- stdout --
	ha-844626
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-844626-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 20:45:32.705472   41468 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:45:32.705566   41468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:45:32.705573   41468 out.go:304] Setting ErrFile to fd 2...
	I0612 20:45:32.705577   41468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:45:32.705778   41468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:45:32.705925   41468 out.go:298] Setting JSON to false
	I0612 20:45:32.705944   41468 mustload.go:65] Loading cluster: ha-844626
	I0612 20:45:32.705998   41468 notify.go:220] Checking for updates...
	I0612 20:45:32.706662   41468 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:45:32.706734   41468 status.go:255] checking status of ha-844626 ...
	I0612 20:45:32.707833   41468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:45:32.707908   41468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:45:32.723691   41468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34439
	I0612 20:45:32.724051   41468 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:45:32.724502   41468 main.go:141] libmachine: Using API Version  1
	I0612 20:45:32.724524   41468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:45:32.724912   41468 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:45:32.725160   41468 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:45:32.726921   41468 status.go:330] ha-844626 host status = "Running" (err=<nil>)
	I0612 20:45:32.726947   41468 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:45:32.727404   41468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:45:32.727452   41468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:45:32.741379   41468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
	I0612 20:45:32.741781   41468 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:45:32.742216   41468 main.go:141] libmachine: Using API Version  1
	I0612 20:45:32.742240   41468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:45:32.742516   41468 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:45:32.742690   41468 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:45:32.745594   41468 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:45:32.746014   41468 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:45:32.746047   41468 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:45:32.746153   41468 host.go:66] Checking if "ha-844626" exists ...
	I0612 20:45:32.746577   41468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:45:32.746634   41468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:45:32.760575   41468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0612 20:45:32.760948   41468 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:45:32.761427   41468 main.go:141] libmachine: Using API Version  1
	I0612 20:45:32.761452   41468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:45:32.761802   41468 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:45:32.762075   41468 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:45:32.762267   41468 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:45:32.762308   41468 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:45:32.765062   41468 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:45:32.765478   41468 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:45:32.765504   41468 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:45:32.765613   41468 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:45:32.765764   41468 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:45:32.765904   41468 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:45:32.766086   41468 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:45:32.849672   41468 ssh_runner.go:195] Run: systemctl --version
	I0612 20:45:32.857126   41468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:45:32.876456   41468 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:45:32.876488   41468 api_server.go:166] Checking apiserver status ...
	I0612 20:45:32.876520   41468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:45:32.894592   41468 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5225/cgroup
	W0612 20:45:32.905257   41468 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5225/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:45:32.905317   41468 ssh_runner.go:195] Run: ls
	I0612 20:45:32.909908   41468 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:45:32.916091   41468 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:45:32.916112   41468 status.go:422] ha-844626 apiserver status = Running (err=<nil>)
	I0612 20:45:32.916120   41468 status.go:257] ha-844626 status: &{Name:ha-844626 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:45:32.916136   41468 status.go:255] checking status of ha-844626-m02 ...
	I0612 20:45:32.916439   41468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:45:32.916479   41468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:45:32.930955   41468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0612 20:45:32.931401   41468 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:45:32.931927   41468 main.go:141] libmachine: Using API Version  1
	I0612 20:45:32.931951   41468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:45:32.932268   41468 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:45:32.932495   41468 main.go:141] libmachine: (ha-844626-m02) Calling .GetState
	I0612 20:45:32.933944   41468 status.go:330] ha-844626-m02 host status = "Running" (err=<nil>)
	I0612 20:45:32.933960   41468 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:45:32.934388   41468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:45:32.934424   41468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:45:32.948573   41468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
	I0612 20:45:32.948939   41468 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:45:32.949377   41468 main.go:141] libmachine: Using API Version  1
	I0612 20:45:32.949395   41468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:45:32.949696   41468 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:45:32.949882   41468 main.go:141] libmachine: (ha-844626-m02) Calling .GetIP
	I0612 20:45:32.952444   41468 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:45:32.952856   41468 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:40:23 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:45:32.952874   41468 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:45:32.953048   41468 host.go:66] Checking if "ha-844626-m02" exists ...
	I0612 20:45:32.953429   41468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:45:32.953474   41468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:45:32.967166   41468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0612 20:45:32.967651   41468 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:45:32.968084   41468 main.go:141] libmachine: Using API Version  1
	I0612 20:45:32.968105   41468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:45:32.968447   41468 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:45:32.968700   41468 main.go:141] libmachine: (ha-844626-m02) Calling .DriverName
	I0612 20:45:32.968873   41468 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:45:32.968896   41468 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHHostname
	I0612 20:45:32.971735   41468 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:45:32.972094   41468 main.go:141] libmachine: (ha-844626-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:79:34", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:40:23 +0000 UTC Type:0 Mac:52:54:00:01:79:34 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-844626-m02 Clientid:01:52:54:00:01:79:34}
	I0612 20:45:32.972113   41468 main.go:141] libmachine: (ha-844626-m02) DBG | domain ha-844626-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:01:79:34 in network mk-ha-844626
	I0612 20:45:32.972287   41468 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHPort
	I0612 20:45:32.972482   41468 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHKeyPath
	I0612 20:45:32.972641   41468 main.go:141] libmachine: (ha-844626-m02) Calling .GetSSHUsername
	I0612 20:45:32.972806   41468 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m02/id_rsa Username:docker}
	I0612 20:45:33.061107   41468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 20:45:33.082759   41468 kubeconfig.go:125] found "ha-844626" server: "https://192.168.39.254:8443"
	I0612 20:45:33.082788   41468 api_server.go:166] Checking apiserver status ...
	I0612 20:45:33.082819   41468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 20:45:33.103215   41468 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1679/cgroup
	W0612 20:45:33.113687   41468 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1679/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 20:45:33.113769   41468 ssh_runner.go:195] Run: ls
	I0612 20:45:33.119423   41468 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0612 20:45:33.123849   41468 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0612 20:45:33.123874   41468 status.go:422] ha-844626-m02 apiserver status = Running (err=<nil>)
	I0612 20:45:33.123885   41468 status.go:257] ha-844626-m02 status: &{Name:ha-844626-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 20:45:33.123918   41468 status.go:255] checking status of ha-844626-m04 ...
	I0612 20:45:33.124237   41468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:45:33.124276   41468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:45:33.139838   41468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33009
	I0612 20:45:33.140212   41468 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:45:33.140691   41468 main.go:141] libmachine: Using API Version  1
	I0612 20:45:33.140718   41468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:45:33.141020   41468 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:45:33.141204   41468 main.go:141] libmachine: (ha-844626-m04) Calling .GetState
	I0612 20:45:33.142814   41468 status.go:330] ha-844626-m04 host status = "Running" (err=<nil>)
	I0612 20:45:33.142829   41468 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:45:33.143084   41468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:45:33.143121   41468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:45:33.157372   41468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0612 20:45:33.157699   41468 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:45:33.158169   41468 main.go:141] libmachine: Using API Version  1
	I0612 20:45:33.158189   41468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:45:33.158492   41468 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:45:33.158659   41468 main.go:141] libmachine: (ha-844626-m04) Calling .GetIP
	I0612 20:45:33.161100   41468 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:45:33.161520   41468 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:42:59 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:45:33.161541   41468 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:45:33.161681   41468 host.go:66] Checking if "ha-844626-m04" exists ...
	I0612 20:45:33.162069   41468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:45:33.162119   41468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:45:33.177787   41468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44357
	I0612 20:45:33.178141   41468 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:45:33.178554   41468 main.go:141] libmachine: Using API Version  1
	I0612 20:45:33.178573   41468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:45:33.178853   41468 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:45:33.179040   41468 main.go:141] libmachine: (ha-844626-m04) Calling .DriverName
	I0612 20:45:33.179281   41468 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 20:45:33.179304   41468 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHHostname
	I0612 20:45:33.181964   41468 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:45:33.182390   41468 main.go:141] libmachine: (ha-844626-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:04:18", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:42:59 +0000 UTC Type:0 Mac:52:54:00:46:04:18 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-844626-m04 Clientid:01:52:54:00:46:04:18}
	I0612 20:45:33.182429   41468 main.go:141] libmachine: (ha-844626-m04) DBG | domain ha-844626-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:46:04:18 in network mk-ha-844626
	I0612 20:45:33.182509   41468 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHPort
	I0612 20:45:33.182669   41468 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHKeyPath
	I0612 20:45:33.182803   41468 main.go:141] libmachine: (ha-844626-m04) Calling .GetSSHUsername
	I0612 20:45:33.182946   41468 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626-m04/id_rsa Username:docker}
	W0612 20:45:51.663378   41468 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.98:22: connect: no route to host
	W0612 20:45:51.663472   41468 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.98:22: connect: no route to host
	E0612 20:45:51.663488   41468 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.98:22: connect: no route to host
	I0612 20:45:51.663495   41468 status.go:257] ha-844626-m04 status: &{Name:ha-844626-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0612 20:45:51.663518   41468 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.98:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr" : exit status 3
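The status command exits 3 here because the worker node's SSH endpoint is unreachable ("dial tcp 192.168.39.98:22: connect: no route to host"), so ha-844626-m04 is reported as Host:Error / Kubelet:Nonexistent. A minimal Go sketch of that reachability probe follows; the address is copied from the log above and would need adjusting for another cluster, and this is an illustrative check, not minikube's status code path.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// main attempts a plain TCP dial to the worker node's SSH port, the same
	// connection that fails above with "no route to host".
	func main() {
		addr := "192.168.39.98:22" // taken from the log; adjust for your own cluster
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// In this situation status reports Host:Error and Kubelet:Nonexistent.
			fmt.Printf("ssh port unreachable: %v\n", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable; status would proceed to run checks over SSH")
	}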
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-844626 -n ha-844626
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-844626 logs -n 25: (1.706673024s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-844626 ssh -n ha-844626-m02 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m03_ha-844626-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m03:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04:/home/docker/cp-test_ha-844626-m03_ha-844626-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626-m04 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m03_ha-844626-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-844626 cp testdata/cp-test.txt                                              | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile43944605/001/cp-test_ha-844626-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626:/home/docker/cp-test_ha-844626-m04_ha-844626.txt                     |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626 sudo cat                                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m04_ha-844626.txt                               |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m02:/home/docker/cp-test_ha-844626-m04_ha-844626-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626-m02 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m04_ha-844626-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m03:/home/docker/cp-test_ha-844626-m04_ha-844626-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n                                                               | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | ha-844626-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-844626 ssh -n ha-844626-m03 sudo cat                                        | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC | 12 Jun 24 20:33 UTC |
	|         | /home/docker/cp-test_ha-844626-m04_ha-844626-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-844626 node stop m02 -v=7                                                   | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:33 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-844626 node start m02 -v=7                                                  | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:35 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-844626 -v=7                                                         | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:36 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | -p ha-844626 -v=7                                                              | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:36 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-844626 --wait=true -v=7                                                  | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:38 UTC | 12 Jun 24 20:43 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-844626                                                              | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:43 UTC |                     |
	| node    | ha-844626 node delete m03 -v=7                                                 | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:43 UTC | 12 Jun 24 20:43 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | ha-844626 stop -v=7                                                            | ha-844626 | jenkins | v1.33.1 | 12 Jun 24 20:43 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 20:38:34
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 20:38:34.712104   39149 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:38:34.712344   39149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:38:34.712353   39149 out.go:304] Setting ErrFile to fd 2...
	I0612 20:38:34.712357   39149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:38:34.712524   39149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:38:34.713045   39149 out.go:298] Setting JSON to false
	I0612 20:38:34.713924   39149 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4860,"bootTime":1718219855,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 20:38:34.713977   39149 start.go:139] virtualization: kvm guest
	I0612 20:38:34.716453   39149 out.go:177] * [ha-844626] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 20:38:34.717841   39149 notify.go:220] Checking for updates...
	I0612 20:38:34.717859   39149 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 20:38:34.719200   39149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 20:38:34.720778   39149 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:38:34.722230   39149 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:38:34.723834   39149 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 20:38:34.725279   39149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 20:38:34.727156   39149 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:38:34.727311   39149 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 20:38:34.727708   39149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:38:34.727780   39149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:38:34.743421   39149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
	I0612 20:38:34.743848   39149 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:38:34.744442   39149 main.go:141] libmachine: Using API Version  1
	I0612 20:38:34.744461   39149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:38:34.744891   39149 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:38:34.745069   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:38:34.779647   39149 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 20:38:34.781007   39149 start.go:297] selected driver: kvm2
	I0612 20:38:34.781022   39149 start.go:901] validating driver "kvm2" against &{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:38:34.781195   39149 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 20:38:34.781556   39149 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:38:34.781663   39149 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 20:38:34.797759   39149 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0612 20:38:34.798429   39149 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 20:38:34.798512   39149 cni.go:84] Creating CNI manager for ""
	I0612 20:38:34.798527   39149 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0612 20:38:34.798584   39149 start.go:340] cluster config:
	{Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:38:34.798719   39149 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:38:34.801453   39149 out.go:177] * Starting "ha-844626" primary control-plane node in "ha-844626" cluster
	I0612 20:38:34.802928   39149 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:38:34.802969   39149 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0612 20:38:34.802982   39149 cache.go:56] Caching tarball of preloaded images
	I0612 20:38:34.803059   39149 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 20:38:34.803071   39149 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 20:38:34.803229   39149 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/config.json ...
	I0612 20:38:34.803444   39149 start.go:360] acquireMachinesLock for ha-844626: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 20:38:34.803501   39149 start.go:364] duration metric: took 38.081µs to acquireMachinesLock for "ha-844626"
	I0612 20:38:34.803521   39149 start.go:96] Skipping create...Using existing machine configuration
	I0612 20:38:34.803529   39149 fix.go:54] fixHost starting: 
	I0612 20:38:34.803782   39149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:38:34.803823   39149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:38:34.818620   39149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0612 20:38:34.819029   39149 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:38:34.819536   39149 main.go:141] libmachine: Using API Version  1
	I0612 20:38:34.819563   39149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:38:34.819898   39149 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:38:34.820069   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:38:34.820366   39149 main.go:141] libmachine: (ha-844626) Calling .GetState
	I0612 20:38:34.821942   39149 fix.go:112] recreateIfNeeded on ha-844626: state=Running err=<nil>
	W0612 20:38:34.821968   39149 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 20:38:34.823921   39149 out.go:177] * Updating the running kvm2 "ha-844626" VM ...
	I0612 20:38:34.825230   39149 machine.go:94] provisionDockerMachine start ...
	I0612 20:38:34.825260   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:38:34.825475   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:38:34.828139   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:34.828643   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:38:34.828672   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:34.828809   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:38:34.829000   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:34.829176   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:34.829330   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:38:34.829503   39149 main.go:141] libmachine: Using SSH client type: native
	I0612 20:38:34.829769   39149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:38:34.829793   39149 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 20:38:34.941467   39149 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844626
	
	I0612 20:38:34.941494   39149 main.go:141] libmachine: (ha-844626) Calling .GetMachineName
	I0612 20:38:34.941748   39149 buildroot.go:166] provisioning hostname "ha-844626"
	I0612 20:38:34.941769   39149 main.go:141] libmachine: (ha-844626) Calling .GetMachineName
	I0612 20:38:34.941970   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:38:34.944831   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:34.945339   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:38:34.945370   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:34.945481   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:38:34.945664   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:34.945894   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:34.946067   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:38:34.946292   39149 main.go:141] libmachine: Using SSH client type: native
	I0612 20:38:34.946466   39149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:38:34.946479   39149 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-844626 && echo "ha-844626" | sudo tee /etc/hostname
	I0612 20:38:35.067005   39149 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-844626
	
	I0612 20:38:35.067047   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:38:35.069794   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.070169   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:38:35.070198   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.070408   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:38:35.070592   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:35.070731   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:35.070866   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:38:35.070992   39149 main.go:141] libmachine: Using SSH client type: native
	I0612 20:38:35.071153   39149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:38:35.071181   39149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-844626' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-844626/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-844626' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 20:38:35.176503   39149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 20:38:35.176534   39149 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 20:38:35.176568   39149 buildroot.go:174] setting up certificates
	I0612 20:38:35.176576   39149 provision.go:84] configureAuth start
	I0612 20:38:35.176589   39149 main.go:141] libmachine: (ha-844626) Calling .GetMachineName
	I0612 20:38:35.176858   39149 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:38:35.179417   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.179766   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:38:35.179812   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.179930   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:38:35.182214   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.182601   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:38:35.182629   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.182743   39149 provision.go:143] copyHostCerts
	I0612 20:38:35.182781   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:38:35.182831   39149 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 20:38:35.182842   39149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 20:38:35.182918   39149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 20:38:35.183014   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:38:35.183034   39149 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 20:38:35.183040   39149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 20:38:35.183083   39149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 20:38:35.183141   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:38:35.183165   39149 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 20:38:35.183184   39149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 20:38:35.183217   39149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 20:38:35.183285   39149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.ha-844626 san=[127.0.0.1 192.168.39.196 ha-844626 localhost minikube]
	I0612 20:38:35.387144   39149 provision.go:177] copyRemoteCerts
	I0612 20:38:35.387229   39149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 20:38:35.387259   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:38:35.390019   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.390350   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:38:35.390379   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.390540   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:38:35.390754   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:35.390917   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:38:35.391065   39149 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:38:35.474126   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0612 20:38:35.474210   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 20:38:35.500491   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0612 20:38:35.500556   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0612 20:38:35.526624   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0612 20:38:35.526685   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 20:38:35.551890   39149 provision.go:87] duration metric: took 375.30296ms to configureAuth
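	The server certificate generated above lists its SANs in the log (127.0.0.1, 192.168.39.196, ha-844626, localhost, minikube). A hedged check on the Jenkins host, assuming only the paths already printed in this log:
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'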
	I0612 20:38:35.551915   39149 buildroot.go:189] setting minikube options for container-runtime
	I0612 20:38:35.552138   39149 config.go:182] Loaded profile config "ha-844626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:38:35.552218   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:38:35.555096   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.555541   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:38:35.555567   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:38:35.555820   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:38:35.556035   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:35.556273   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:38:35.556468   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:38:35.556672   39149 main.go:141] libmachine: Using SSH client type: native
	I0612 20:38:35.556878   39149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:38:35.556913   39149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 20:40:06.412994   39149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 20:40:06.413021   39149 machine.go:97] duration metric: took 1m31.587775076s to provisionDockerMachine
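	Note that the 1m31s spent in provisionDockerMachine is almost entirely the single SSH command issued at 20:38:35, which writes CRIO_MINIKUBE_OPTIONS and then runs "sudo systemctl restart crio"; it only returns at 20:40:06. A minimal sketch of how to inspect the drop-in it wrote, assuming the crio unit on this buildroot image loads /etc/sysconfig/crio.minikube via an EnvironmentFile= directive:
	  cat /etc/sysconfig/crio.minikube              # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  systemctl cat crio | grep -i environmentfile  # confirm the unit actually sources the file (assumption)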
	I0612 20:40:06.413037   39149 start.go:293] postStartSetup for "ha-844626" (driver="kvm2")
	I0612 20:40:06.413051   39149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 20:40:06.413070   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:40:06.413389   39149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 20:40:06.413419   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:40:06.416258   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.416626   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:40:06.416651   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.416811   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:40:06.417002   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:40:06.417177   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:40:06.417315   39149 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:40:06.500013   39149 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 20:40:06.504268   39149 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 20:40:06.504291   39149 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 20:40:06.504368   39149 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 20:40:06.504456   39149 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 20:40:06.504467   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /etc/ssl/certs/214442.pem
	I0612 20:40:06.504563   39149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 20:40:06.515196   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:40:06.538492   39149 start.go:296] duration metric: took 125.440977ms for postStartSetup
	I0612 20:40:06.538536   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:40:06.538824   39149 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0612 20:40:06.538847   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:40:06.541351   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.541699   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:40:06.541719   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.541885   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:40:06.542075   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:40:06.542232   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:40:06.542347   39149 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	W0612 20:40:06.622031   39149 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0612 20:40:06.622052   39149 fix.go:56] duration metric: took 1m31.818525074s for fixHost
	I0612 20:40:06.622073   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:40:06.624588   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.625027   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:40:06.625085   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.625204   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:40:06.625396   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:40:06.625593   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:40:06.625740   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:40:06.625902   39149 main.go:141] libmachine: Using SSH client type: native
	I0612 20:40:06.626052   39149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0612 20:40:06.626061   39149 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 20:40:06.728179   39149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718224806.687531994
	
	I0612 20:40:06.728236   39149 fix.go:216] guest clock: 1718224806.687531994
	I0612 20:40:06.728263   39149 fix.go:229] Guest: 2024-06-12 20:40:06.687531994 +0000 UTC Remote: 2024-06-12 20:40:06.622059013 +0000 UTC m=+91.943977263 (delta=65.472981ms)
	I0612 20:40:06.728301   39149 fix.go:200] guest clock delta is within tolerance: 65.472981ms
	I0612 20:40:06.728309   39149 start.go:83] releasing machines lock for "ha-844626", held for 1m31.924796123s
	I0612 20:40:06.728340   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:40:06.728629   39149 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:40:06.731166   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.731572   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:40:06.731600   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.731723   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:40:06.732245   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:40:06.732425   39149 main.go:141] libmachine: (ha-844626) Calling .DriverName
	I0612 20:40:06.732497   39149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 20:40:06.732537   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:40:06.732577   39149 ssh_runner.go:195] Run: cat /version.json
	I0612 20:40:06.732594   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHHostname
	I0612 20:40:06.735020   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.735211   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.735463   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:40:06.735483   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.735622   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:40:06.735656   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:40:06.735678   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:06.735833   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHPort
	I0612 20:40:06.735842   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:40:06.736005   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHKeyPath
	I0612 20:40:06.736006   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:40:06.736184   39149 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:40:06.736198   39149 main.go:141] libmachine: (ha-844626) Calling .GetSSHUsername
	I0612 20:40:06.736321   39149 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/ha-844626/id_rsa Username:docker}
	I0612 20:40:06.812762   39149 ssh_runner.go:195] Run: systemctl --version
	I0612 20:40:06.838816   39149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 20:40:07.005557   39149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 20:40:07.014768   39149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 20:40:07.014844   39149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 20:40:07.024257   39149 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0612 20:40:07.024274   39149 start.go:494] detecting cgroup driver to use...
	I0612 20:40:07.024341   39149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 20:40:07.040276   39149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 20:40:07.054168   39149 docker.go:217] disabling cri-docker service (if available) ...
	I0612 20:40:07.054217   39149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 20:40:07.068300   39149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 20:40:07.082569   39149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 20:40:07.234949   39149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 20:40:07.384601   39149 docker.go:233] disabling docker service ...
	I0612 20:40:07.384665   39149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 20:40:07.402285   39149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 20:40:07.415788   39149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 20:40:07.558521   39149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 20:40:07.707378   39149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 20:40:07.721917   39149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 20:40:07.741663   39149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 20:40:07.741728   39149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:40:07.752331   39149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 20:40:07.752400   39149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:40:07.762707   39149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:40:07.773279   39149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:40:07.784122   39149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 20:40:07.795203   39149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:40:07.806138   39149 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:40:07.817835   39149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 20:40:07.828272   39149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 20:40:07.838058   39149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 20:40:07.847424   39149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:40:07.996459   39149 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 20:40:10.990562   39149 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.994066026s)
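	The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before the ~3s crio restart. A one-line check of the resulting values on the guest:
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # expected: pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs", conmon_cgroup = "pod",
	  #           and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls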
	I0612 20:40:10.990593   39149 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 20:40:10.990635   39149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 20:40:10.995880   39149 start.go:562] Will wait 60s for crictl version
	I0612 20:40:10.995923   39149 ssh_runner.go:195] Run: which crictl
	I0612 20:40:10.999815   39149 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 20:40:11.041550   39149 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 20:40:11.041627   39149 ssh_runner.go:195] Run: crio --version
	I0612 20:40:11.070257   39149 ssh_runner.go:195] Run: crio --version
	I0612 20:40:11.101721   39149 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
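	The version probe above goes through /usr/bin/crictl against crio's default socket. Reproducing it by hand (a sketch, pinning the endpoint explicitly rather than relying on /etc/crictl.yaml):
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version   # cri-o 1.29.1, API v1, as logged above
	  crio --version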
	I0612 20:40:11.103275   39149 main.go:141] libmachine: (ha-844626) Calling .GetIP
	I0612 20:40:11.105818   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:11.106174   39149 main.go:141] libmachine: (ha-844626) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:2d:9f", ip: ""} in network mk-ha-844626: {Iface:virbr1 ExpiryTime:2024-06-12 21:27:55 +0000 UTC Type:0 Mac:52:54:00:8a:2d:9f Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-844626 Clientid:01:52:54:00:8a:2d:9f}
	I0612 20:40:11.106202   39149 main.go:141] libmachine: (ha-844626) DBG | domain ha-844626 has defined IP address 192.168.39.196 and MAC address 52:54:00:8a:2d:9f in network mk-ha-844626
	I0612 20:40:11.106439   39149 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 20:40:11.111490   39149 kubeadm.go:877] updating cluster {Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 20:40:11.111618   39149 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:40:11.111660   39149 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 20:40:11.157660   39149 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 20:40:11.157682   39149 crio.go:433] Images already preloaded, skipping extraction
	I0612 20:40:11.157732   39149 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 20:40:11.196348   39149 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 20:40:11.196375   39149 cache_images.go:84] Images are preloaded, skipping loading
	I0612 20:40:11.196387   39149 kubeadm.go:928] updating node { 192.168.39.196 8443 v1.30.1 crio true true} ...
	I0612 20:40:11.196490   39149 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-844626 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 20:40:11.196551   39149 ssh_runner.go:195] Run: crio config
	I0612 20:40:11.245223   39149 cni.go:84] Creating CNI manager for ""
	I0612 20:40:11.245239   39149 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0612 20:40:11.245248   39149 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 20:40:11.245279   39149 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-844626 NodeName:ha-844626 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 20:40:11.245442   39149 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-844626"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
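	The rendered kubeadm config above is copied to the node a moment later as /var/tmp/minikube/kubeadm.yaml.new (2153 bytes, see the scp at 20:40:11). As a hedged sketch, and assuming the bundled v1.30.1 kubeadm ships the "config validate" subcommand, the file can be sanity-checked in place with:
	  sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new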
	
	I0612 20:40:11.245467   39149 kube-vip.go:115] generating kube-vip config ...
	I0612 20:40:11.245514   39149 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0612 20:40:11.257847   39149 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0612 20:40:11.257946   39149 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
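	The static pod manifest above pins the VIP 192.168.39.254 to eth0 with control-plane load balancing on port 8443. Once kube-vip has elected a leader, a minimal check from a control-plane node (a sketch; it assumes /version is still served to anonymous requests, the kube-apiserver default):
	  ip addr show eth0 | grep 192.168.39.254     # the VIP should be bound on the current leader
	  curl -sk https://192.168.39.254:8443/version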
	I0612 20:40:11.258008   39149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 20:40:11.268067   39149 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 20:40:11.268138   39149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0612 20:40:11.277887   39149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0612 20:40:11.295688   39149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 20:40:11.312882   39149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0612 20:40:11.329895   39149 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0612 20:40:11.348600   39149 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0612 20:40:11.359564   39149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 20:40:11.508615   39149 ssh_runner.go:195] Run: sudo systemctl start kubelet
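	At this point the kubelet unit, its 10-kubeadm.conf drop-in, kubeadm.yaml.new and kube-vip.yaml have all been written and the kubelet started. A short hedged check that systemd picked the drop-in up:
	  systemctl is-active kubelet
	  systemctl cat kubelet | grep -A3 '10-kubeadm.conf'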
	I0612 20:40:11.524444   39149 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626 for IP: 192.168.39.196
	I0612 20:40:11.524466   39149 certs.go:194] generating shared ca certs ...
	I0612 20:40:11.524482   39149 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:40:11.524636   39149 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 20:40:11.524686   39149 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 20:40:11.524700   39149 certs.go:256] generating profile certs ...
	I0612 20:40:11.524803   39149 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/client.key
	I0612 20:40:11.524837   39149 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.dc56d1b6
	I0612 20:40:11.524857   39149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.dc56d1b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.108 192.168.39.76 192.168.39.254]
	I0612 20:40:12.014863   39149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.dc56d1b6 ...
	I0612 20:40:12.014898   39149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.dc56d1b6: {Name:mkea74692ba818d459bfe24cc809837ba8cc37aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:40:12.015115   39149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.dc56d1b6 ...
	I0612 20:40:12.015133   39149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.dc56d1b6: {Name:mkfa2aef60fd21dd1b6b30767207e755ac62c104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:40:12.015254   39149 certs.go:381] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt.dc56d1b6 -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt
	I0612 20:40:12.015464   39149 certs.go:385] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key.dc56d1b6 -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key
	I0612 20:40:12.015658   39149 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key
	I0612 20:40:12.015678   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 20:40:12.015709   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0612 20:40:12.015732   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 20:40:12.015756   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 20:40:12.015775   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 20:40:12.015796   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 20:40:12.015818   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 20:40:12.015840   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 20:40:12.015907   39149 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 20:40:12.015951   39149 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 20:40:12.015966   39149 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 20:40:12.016014   39149 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 20:40:12.016053   39149 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 20:40:12.016088   39149 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 20:40:12.016150   39149 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 20:40:12.016194   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /usr/share/ca-certificates/214442.pem
	I0612 20:40:12.016216   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:40:12.016236   39149 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem -> /usr/share/ca-certificates/21444.pem
	I0612 20:40:12.016824   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 20:40:12.042998   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 20:40:12.066657   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 20:40:12.090297   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 20:40:12.114771   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0612 20:40:12.139044   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 20:40:12.163809   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 20:40:12.188358   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/ha-844626/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 20:40:12.213228   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 20:40:12.238051   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 20:40:12.262216   39149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 20:40:12.285543   39149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 20:40:12.302790   39149 ssh_runner.go:195] Run: openssl version
	I0612 20:40:12.309200   39149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 20:40:12.319681   39149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 20:40:12.324276   39149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 20:40:12.324333   39149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 20:40:12.330069   39149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 20:40:12.340476   39149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 20:40:12.351384   39149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:40:12.355908   39149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:40:12.355969   39149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 20:40:12.361544   39149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 20:40:12.370454   39149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 20:40:12.380966   39149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 20:40:12.385518   39149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 20:40:12.385579   39149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 20:40:12.391440   39149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
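	The three "ln -fs" runs above create OpenSSL hash-named links (3ec20f2e.0, b5213941.0, 51391683.0) so CA lookups by subject hash resolve; the hash half of each name is exactly what "openssl x509 -hash" prints. A minimal sketch using the minikubeCA entry from this log:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem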
	I0612 20:40:12.400649   39149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 20:40:12.405772   39149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 20:40:12.411608   39149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 20:40:12.417495   39149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 20:40:12.423165   39149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 20:40:12.428860   39149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 20:40:12.435374   39149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 20:40:12.441545   39149 kubeadm.go:391] StartCluster: {Name:ha-844626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-844626 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.98 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:40:12.441681   39149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 20:40:12.441727   39149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 20:40:12.515928   39149 cri.go:89] found id: "3c021ec12933d9321a7393dfad4f45b7d05ffc04c4c8954c28e02082e86c1306"
	I0612 20:40:12.515947   39149 cri.go:89] found id: "944c3d1c25165f196a8d630dc945dc1a4162fb8a11f750259dd23974392b5a8c"
	I0612 20:40:12.515951   39149 cri.go:89] found id: "09c8070fe3b658046a9a19733813849b24fa6b99ac5080e9c92e4865b4b3cdc3"
	I0612 20:40:12.515954   39149 cri.go:89] found id: "ed87fc57398ca349ce32bc4fcea61bb7ede6451b9fe8db63349ef7ee6151bd50"
	I0612 20:40:12.515957   39149 cri.go:89] found id: "5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0"
	I0612 20:40:12.515962   39149 cri.go:89] found id: "6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff"
	I0612 20:40:12.515965   39149 cri.go:89] found id: "63a8f38c6abf70e91806516f6efb3aec847188dad6c91439ca9660d95029a3e6"
	I0612 20:40:12.515967   39149 cri.go:89] found id: "b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c"
	I0612 20:40:12.515970   39149 cri.go:89] found id: "cd52024c12a2b486d52b8f6803360b3172fb54227b17758bbd09a2e22dc32163"
	I0612 20:40:12.515974   39149 cri.go:89] found id: "6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d"
	I0612 20:40:12.515977   39149 cri.go:89] found id: "223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92"
	I0612 20:40:12.515979   39149 cri.go:89] found id: "41bc9389144d30c98a68d86d2f724492e05278d6c650700937bb9e9dca93881a"
	I0612 20:40:12.515981   39149 cri.go:89] found id: "1ac304305cc393d3678df3414155a5e9ca1fb5abecbd1ecb70c20c1c4f562bbf"
	I0612 20:40:12.515984   39149 cri.go:89] found id: ""
	I0612 20:40:12.516032   39149 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.238454634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718225152238431771,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32cb639c-3bef-4bc6-b73f-8a0e88ce7d5e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.239128032Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf794a37-708f-48f6-a7fc-689fdcb9914b name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.239187946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf794a37-708f-48f6-a7fc-689fdcb9914b name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.239661586Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7032c9d518b83b22af1468d51f671cd78fe893958d313f9a62c6310e07e5eb6c,PodSandboxId:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718224899813940041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f432160e35b26c7b012ec4edfd7d00508fb15c4cc8f9547df1507fa19a6dabee,PodSandboxId:2791c645324815b106b820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718224879817848716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fe7677b75490eb9a887b3192e914a38bbc5fd772111c9a731fd0c67b961eea,PodSandboxId:7ad55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224860802741608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64c4a2d567239f2cf47396cba150c895012356b8ff9c055eafd3490a6316c791,PodSandboxId:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224856808159694,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705c2561cc55952a4ac898f54cc34444e53d2f4bdfa63cf7bd8c2ebb56472f73,PodSandboxId:641e7ec9022152f82e52e566a21ce495ad6fccbd26b6cd0a919ea39bd3bc1dea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224851037770649,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd3375cfa65cdf6427956610a22c5ad458ab15dcb4c60281d661e3b46f921ce,PodSandboxId:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718224849810548318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8445efcf36094d2712e6d7eeebc0e6b73520b6f1f530e37bbf40c8108e6e326e,PodSandboxId:b9d9b289b932c027eadfd224d1f9763c600e3cd5b391176fe10b1d15c75c0302,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718224832658819585,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578a6adb37c07fb3ddb14c1b9f4fcd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:847d6ff92e8e601118971db1953ddd8cd8fd05b8a16cb89aef9e6bf5c67a8426,PodSandboxId:125f3e7aa763c8c93918780c5657199e412c7d2ff7c89b4c9599b1b8c13ab2fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718224817714758801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99ff254
1bb480d1e29fe0cdcb21ac962bbb63edc50c303d905d5df9c801bb3f,PodSandboxId:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718224817578288212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4350e829162646be417b019b2cb971
ff3a4548b2e65be4e5d7cc77a69a513de1,PodSandboxId:7156712f8ff2d4b1d06493d07a671bf6c4cf93c4fa5f096208275e7832fc39de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224817463520758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d98b9e0f5051ab363ca02821c8f8d231f5298a04d44f3f40a1ac8a145a70e570,PodSandboxId:7ad
55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718224817496833882,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82299b1981f41471feb0a36cd022834e98c7a620a668655d739be255454304da,PodSandboxI
d:a2cb079d37a3df3a47fa418b51318b536fcacbe99a2d5d5e64178be7ae8c9e95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224817415619422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf561296e89021cfeb3942a411fdb1a39d363d089d6c0e3abc9f21a0ed0a02b,PodSandboxId:2791c645324815b106b
820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718224812970582901,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209b6c2b28de4f9be36a8b96a42fd0658f8741138b54758c0a4036332c38a03b,PodSandboxId:40c46a3d0827b647af9e44003959e84272fa458e2637139dc12e33
0df8ecc125,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812829188315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:177da54ccde0b134f353821e30d94d485a45f9d5c67619d03d4ff3935aed495d,PodSandboxId:1c7b0383df5e6c2039396c35f89b50155ef1ff7d02214ba0dd246af1bfc68f23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812774314531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf4b3ead47f7dfc1b7faf2419e80a004cb2158ced9fe68be13277115f3c6569,PodSandboxId:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718224321149910871,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kuberne
tes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0,PodSandboxId:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718224119278718424,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff,PodSandboxId:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718224119207439239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c,PodSandboxId:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718224113786859746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d,PodSandboxId:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718224093469660512,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92,PodSandboxId:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718224093415553992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf794a37-708f-48f6-a7fc-689fdcb9914b name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.282329122Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34838053-51f0-4a46-b24d-24b49076ae06 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.282415226Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34838053-51f0-4a46-b24d-24b49076ae06 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.283893088Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3fc0058-df06-4fc7-b144-7271187c117c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.284417842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718225152284392558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3fc0058-df06-4fc7-b144-7271187c117c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.284974398Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47919516-14f1-4434-aec7-5d5365a012ff name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.285032240Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47919516-14f1-4434-aec7-5d5365a012ff name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.285472169Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7032c9d518b83b22af1468d51f671cd78fe893958d313f9a62c6310e07e5eb6c,PodSandboxId:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718224899813940041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f432160e35b26c7b012ec4edfd7d00508fb15c4cc8f9547df1507fa19a6dabee,PodSandboxId:2791c645324815b106b820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718224879817848716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fe7677b75490eb9a887b3192e914a38bbc5fd772111c9a731fd0c67b961eea,PodSandboxId:7ad55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224860802741608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64c4a2d567239f2cf47396cba150c895012356b8ff9c055eafd3490a6316c791,PodSandboxId:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224856808159694,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705c2561cc55952a4ac898f54cc34444e53d2f4bdfa63cf7bd8c2ebb56472f73,PodSandboxId:641e7ec9022152f82e52e566a21ce495ad6fccbd26b6cd0a919ea39bd3bc1dea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224851037770649,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd3375cfa65cdf6427956610a22c5ad458ab15dcb4c60281d661e3b46f921ce,PodSandboxId:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718224849810548318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8445efcf36094d2712e6d7eeebc0e6b73520b6f1f530e37bbf40c8108e6e326e,PodSandboxId:b9d9b289b932c027eadfd224d1f9763c600e3cd5b391176fe10b1d15c75c0302,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718224832658819585,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578a6adb37c07fb3ddb14c1b9f4fcd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:847d6ff92e8e601118971db1953ddd8cd8fd05b8a16cb89aef9e6bf5c67a8426,PodSandboxId:125f3e7aa763c8c93918780c5657199e412c7d2ff7c89b4c9599b1b8c13ab2fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718224817714758801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99ff254
1bb480d1e29fe0cdcb21ac962bbb63edc50c303d905d5df9c801bb3f,PodSandboxId:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718224817578288212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4350e829162646be417b019b2cb971
ff3a4548b2e65be4e5d7cc77a69a513de1,PodSandboxId:7156712f8ff2d4b1d06493d07a671bf6c4cf93c4fa5f096208275e7832fc39de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224817463520758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d98b9e0f5051ab363ca02821c8f8d231f5298a04d44f3f40a1ac8a145a70e570,PodSandboxId:7ad
55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718224817496833882,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82299b1981f41471feb0a36cd022834e98c7a620a668655d739be255454304da,PodSandboxI
d:a2cb079d37a3df3a47fa418b51318b536fcacbe99a2d5d5e64178be7ae8c9e95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224817415619422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf561296e89021cfeb3942a411fdb1a39d363d089d6c0e3abc9f21a0ed0a02b,PodSandboxId:2791c645324815b106b
820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718224812970582901,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209b6c2b28de4f9be36a8b96a42fd0658f8741138b54758c0a4036332c38a03b,PodSandboxId:40c46a3d0827b647af9e44003959e84272fa458e2637139dc12e33
0df8ecc125,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812829188315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:177da54ccde0b134f353821e30d94d485a45f9d5c67619d03d4ff3935aed495d,PodSandboxId:1c7b0383df5e6c2039396c35f89b50155ef1ff7d02214ba0dd246af1bfc68f23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812774314531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf4b3ead47f7dfc1b7faf2419e80a004cb2158ced9fe68be13277115f3c6569,PodSandboxId:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718224321149910871,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kuberne
tes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0,PodSandboxId:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718224119278718424,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff,PodSandboxId:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718224119207439239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c,PodSandboxId:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718224113786859746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d,PodSandboxId:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718224093469660512,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92,PodSandboxId:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718224093415553992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47919516-14f1-4434-aec7-5d5365a012ff name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.341752435Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=436ac330-5d85-42b6-9693-bcb305c38a23 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.341831112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=436ac330-5d85-42b6-9693-bcb305c38a23 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.342919158Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a01d175-acd8-4dd6-be87-be665a522661 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.343637696Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718225152343612845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a01d175-acd8-4dd6-be87-be665a522661 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.344097422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a9f2b40-6840-45fd-93d7-ea87cc3a1f93 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.344156719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a9f2b40-6840-45fd-93d7-ea87cc3a1f93 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.350025081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7032c9d518b83b22af1468d51f671cd78fe893958d313f9a62c6310e07e5eb6c,PodSandboxId:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718224899813940041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f432160e35b26c7b012ec4edfd7d00508fb15c4cc8f9547df1507fa19a6dabee,PodSandboxId:2791c645324815b106b820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718224879817848716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fe7677b75490eb9a887b3192e914a38bbc5fd772111c9a731fd0c67b961eea,PodSandboxId:7ad55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224860802741608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64c4a2d567239f2cf47396cba150c895012356b8ff9c055eafd3490a6316c791,PodSandboxId:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224856808159694,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705c2561cc55952a4ac898f54cc34444e53d2f4bdfa63cf7bd8c2ebb56472f73,PodSandboxId:641e7ec9022152f82e52e566a21ce495ad6fccbd26b6cd0a919ea39bd3bc1dea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224851037770649,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd3375cfa65cdf6427956610a22c5ad458ab15dcb4c60281d661e3b46f921ce,PodSandboxId:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718224849810548318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8445efcf36094d2712e6d7eeebc0e6b73520b6f1f530e37bbf40c8108e6e326e,PodSandboxId:b9d9b289b932c027eadfd224d1f9763c600e3cd5b391176fe10b1d15c75c0302,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718224832658819585,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578a6adb37c07fb3ddb14c1b9f4fcd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:847d6ff92e8e601118971db1953ddd8cd8fd05b8a16cb89aef9e6bf5c67a8426,PodSandboxId:125f3e7aa763c8c93918780c5657199e412c7d2ff7c89b4c9599b1b8c13ab2fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718224817714758801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99ff254
1bb480d1e29fe0cdcb21ac962bbb63edc50c303d905d5df9c801bb3f,PodSandboxId:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718224817578288212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4350e829162646be417b019b2cb971
ff3a4548b2e65be4e5d7cc77a69a513de1,PodSandboxId:7156712f8ff2d4b1d06493d07a671bf6c4cf93c4fa5f096208275e7832fc39de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224817463520758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d98b9e0f5051ab363ca02821c8f8d231f5298a04d44f3f40a1ac8a145a70e570,PodSandboxId:7ad
55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718224817496833882,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82299b1981f41471feb0a36cd022834e98c7a620a668655d739be255454304da,PodSandboxI
d:a2cb079d37a3df3a47fa418b51318b536fcacbe99a2d5d5e64178be7ae8c9e95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224817415619422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf561296e89021cfeb3942a411fdb1a39d363d089d6c0e3abc9f21a0ed0a02b,PodSandboxId:2791c645324815b106b
820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718224812970582901,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209b6c2b28de4f9be36a8b96a42fd0658f8741138b54758c0a4036332c38a03b,PodSandboxId:40c46a3d0827b647af9e44003959e84272fa458e2637139dc12e33
0df8ecc125,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812829188315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:177da54ccde0b134f353821e30d94d485a45f9d5c67619d03d4ff3935aed495d,PodSandboxId:1c7b0383df5e6c2039396c35f89b50155ef1ff7d02214ba0dd246af1bfc68f23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812774314531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf4b3ead47f7dfc1b7faf2419e80a004cb2158ced9fe68be13277115f3c6569,PodSandboxId:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718224321149910871,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kuberne
tes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0,PodSandboxId:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718224119278718424,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff,PodSandboxId:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718224119207439239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c,PodSandboxId:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718224113786859746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d,PodSandboxId:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718224093469660512,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92,PodSandboxId:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718224093415553992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a9f2b40-6840-45fd-93d7-ea87cc3a1f93 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.393910879Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51d5e2ad-a07c-4cf2-8d7f-5278418fec88 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.394003058Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51d5e2ad-a07c-4cf2-8d7f-5278418fec88 name=/runtime.v1.RuntimeService/Version
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.394855073Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4517ed49-97c0-4a81-b071-520c1cdea336 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.395377131Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718225152395354560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4517ed49-97c0-4a81-b071-520c1cdea336 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.396180745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9fbe731-de43-4479-b47d-63c5691ed54c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.396350354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9fbe731-de43-4479-b47d-63c5691ed54c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 20:45:52 ha-844626 crio[3826]: time="2024-06-12 20:45:52.397003793Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7032c9d518b83b22af1468d51f671cd78fe893958d313f9a62c6310e07e5eb6c,PodSandboxId:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718224899813940041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f432160e35b26c7b012ec4edfd7d00508fb15c4cc8f9547df1507fa19a6dabee,PodSandboxId:2791c645324815b106b820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718224879817848716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fe7677b75490eb9a887b3192e914a38bbc5fd772111c9a731fd0c67b961eea,PodSandboxId:7ad55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718224860802741608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64c4a2d567239f2cf47396cba150c895012356b8ff9c055eafd3490a6316c791,PodSandboxId:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718224856808159694,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705c2561cc55952a4ac898f54cc34444e53d2f4bdfa63cf7bd8c2ebb56472f73,PodSandboxId:641e7ec9022152f82e52e566a21ce495ad6fccbd26b6cd0a919ea39bd3bc1dea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718224851037770649,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kubernetes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd3375cfa65cdf6427956610a22c5ad458ab15dcb4c60281d661e3b46f921ce,PodSandboxId:5c95de2f00554564828f54094401e5fec4db5051d05d38940ffd64de85b81037,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718224849810548318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d94c16d7-da82-41e3-82fe-83ed6e581f69,},Annotations:map[string]string{io.kubernetes.container.hash: eb905b5b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8445efcf36094d2712e6d7eeebc0e6b73520b6f1f530e37bbf40c8108e6e326e,PodSandboxId:b9d9b289b932c027eadfd224d1f9763c600e3cd5b391176fe10b1d15c75c0302,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718224832658819585,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578a6adb37c07fb3ddb14c1b9f4fcd3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:847d6ff92e8e601118971db1953ddd8cd8fd05b8a16cb89aef9e6bf5c67a8426,PodSandboxId:125f3e7aa763c8c93918780c5657199e412c7d2ff7c89b4c9599b1b8c13ab2fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718224817714758801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99ff254
1bb480d1e29fe0cdcb21ac962bbb63edc50c303d905d5df9c801bb3f,PodSandboxId:ff49270d85d970b0f889abf2c5cac08bdd5a93e64ff68b1f01bede4838fa7236,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718224817578288212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d96acdf137cf3b5a36cb1641ff47f87,},Annotations:map[string]string{io.kubernetes.container.hash: 62f90dcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4350e829162646be417b019b2cb971
ff3a4548b2e65be4e5d7cc77a69a513de1,PodSandboxId:7156712f8ff2d4b1d06493d07a671bf6c4cf93c4fa5f096208275e7832fc39de,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718224817463520758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d98b9e0f5051ab363ca02821c8f8d231f5298a04d44f3f40a1ac8a145a70e570,PodSandboxId:7ad
55e7c88ed2ac77876690a89df525b2fdce8ad095f844595d3b93594241207,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718224817496833882,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a4dcb0404b2818e4d9a3c344a7e5d6,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82299b1981f41471feb0a36cd022834e98c7a620a668655d739be255454304da,PodSandboxI
d:a2cb079d37a3df3a47fa418b51318b536fcacbe99a2d5d5e64178be7ae8c9e95,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718224817415619422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf561296e89021cfeb3942a411fdb1a39d363d089d6c0e3abc9f21a0ed0a02b,PodSandboxId:2791c645324815b106b
820f82eaffaeaf6536e8d6fa05febd6572abb05adc4ef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718224812970582901,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mthnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49950bb0-368d-4239-ae93-04c980a8b531,},Annotations:map[string]string{io.kubernetes.container.hash: 966f9966,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209b6c2b28de4f9be36a8b96a42fd0658f8741138b54758c0a4036332c38a03b,PodSandboxId:40c46a3d0827b647af9e44003959e84272fa458e2637139dc12e33
0df8ecc125,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812829188315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:177da54ccde0b134f353821e30d94d485a45f9d5c67619d03d4ff3935aed495d,PodSandboxId:1c7b0383df5e6c2039396c35f89b50155ef1ff7d02214ba0dd246af1bfc68f23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718224812774314531,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort
\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf4b3ead47f7dfc1b7faf2419e80a004cb2158ced9fe68be13277115f3c6569,PodSandboxId:61e1e7d7b51fb162f2b35a8ec5e7995fd71c9ac25c2006c7272938dbfa7cb819,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718224321149910871,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bdzsx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74f96190-8d97-478c-b01d-de61520289be,},Annotations:map[string]string{io.kuberne
tes.container.hash: 7dfe825e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0,PodSandboxId:43f0b5e0d015c6d4a627c066631b29cea7dc9b1e5202e19393c423d6d28be65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718224119278718424,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxd6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d25d78-6fa7-4dc7-9cf2-e2fac796f194,},Annotations:map[string]string{io.kubernetes.container.hash: 472d1d72,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff,PodSandboxId:5dcd51ad312e16089044b578a1792d8851306ab15ecdb29fe98927b50a88c840,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718224119207439239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-bqzvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22b3ba0-1a59-4066-9db5-380986d73dca,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee9073d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c,PodSandboxId:4e233e0bc3bb763d91867e794034095b52904e58b126becdd2cbf30ecfd45887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718224113786859746,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69ctp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c66149e8-2a69-4f1f-9ddc-5e272204e6f5,},Annotations:map[string]string{io.kubernetes.container.hash: a7af5ce3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d,PodSandboxId:b0297d465b2518f1f34a2ba7759ab2d2ca7379ea1b8d3c12b5c98a6543796fd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718224093469660512,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eeb7c1880efee41beff2f38986d6a2f,},Annotations:map[string]string{io.kubernetes.container.hash: b3fa62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92,PodSandboxId:5512a35ec1cf114ac6eb1f16a78ada4574f36f7c30f15344eb5647a90d1d9568,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718224093415553992,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-844626,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a445b2a0c4cdfeb60569362c5f7933,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9fbe731-de43-4479-b47d-63c5691ed54c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7032c9d518b83       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   5c95de2f00554       storage-provisioner
	f432160e35b26       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      4 minutes ago       Running             kindnet-cni               3                   2791c64532481       kindnet-mthnq
	27fe7677b7549       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      4 minutes ago       Running             kube-controller-manager   2                   7ad55e7c88ed2       kube-controller-manager-ha-844626
	64c4a2d567239       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      4 minutes ago       Running             kube-apiserver            3                   ff49270d85d97       kube-apiserver-ha-844626
	705c2561cc559       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   641e7ec902215       busybox-fc5497c4f-bdzsx
	6cd3375cfa65c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   5c95de2f00554       storage-provisioner
	8445efcf36094       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   b9d9b289b932c       kube-vip-ha-844626
	847d6ff92e8e6       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      5 minutes ago       Running             kube-proxy                1                   125f3e7aa763c       kube-proxy-69ctp
	c99ff2541bb48       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      5 minutes ago       Exited              kube-apiserver            2                   ff49270d85d97       kube-apiserver-ha-844626
	d98b9e0f5051a       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      5 minutes ago       Exited              kube-controller-manager   1                   7ad55e7c88ed2       kube-controller-manager-ha-844626
	4350e82916264       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   7156712f8ff2d       etcd-ha-844626
	82299b1981f41       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      5 minutes ago       Running             kube-scheduler            1                   a2cb079d37a3d       kube-scheduler-ha-844626
	ecf561296e890       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      5 minutes ago       Exited              kindnet-cni               2                   2791c64532481       kindnet-mthnq
	209b6c2b28de4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   40c46a3d0827b       coredns-7db6d8ff4d-bqzvn
	177da54ccde0b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   1c7b0383df5e6       coredns-7db6d8ff4d-lxd6n
	ccf4b3ead47f7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   61e1e7d7b51fb       busybox-fc5497c4f-bdzsx
	5eb15a71cbeec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   43f0b5e0d015c       coredns-7db6d8ff4d-lxd6n
	6f896bc7211fd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   5dcd51ad312e1       coredns-7db6d8ff4d-bqzvn
	b028950fdf37b       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      17 minutes ago      Exited              kube-proxy                0                   4e233e0bc3bb7       kube-proxy-69ctp
	6255c7db8bcf2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   b0297d465b251       etcd-ha-844626
	223d45eb38f84       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      17 minutes ago      Exited              kube-scheduler            0                   5512a35ec1cf1       kube-scheduler-ha-844626
	
	
	==> coredns [177da54ccde0b134f353821e30d94d485a45f9d5c67619d03d4ff3935aed495d] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1824826505]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Jun-2024 20:40:21.024) (total time: 10001ms):
	Trace[1824826505]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:40:31.026)
	Trace[1824826505]: [10.001539554s] [10.001539554s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:51602->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:51602->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [209b6c2b28de4f9be36a8b96a42fd0658f8741138b54758c0a4036332c38a03b] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1170774030]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Jun-2024 20:40:19.764) (total time: 10001ms):
	Trace[1170774030]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:40:29.765)
	Trace[1170774030]: [10.001659789s] [10.001659789s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1540583505]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Jun-2024 20:40:23.841) (total time: 10001ms):
	Trace[1540583505]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:40:33.842)
	Trace[1540583505]: [10.0015879s] [10.0015879s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57402->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57402->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57410->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57410->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [5eb15a71cbeec1316cd995a62e99dd00c942a2939fde1af1eefd6e6de5e21ff0] <==
	[INFO] 10.244.2.2:46088 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001813687s
	[INFO] 10.244.2.2:41288 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099916s
	[INFO] 10.244.2.2:50111 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001353864s
	[INFO] 10.244.2.2:58718 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071988s
	[INFO] 10.244.2.2:53104 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063402s
	[INFO] 10.244.2.2:33504 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000200272s
	[INFO] 10.244.0.4:57974 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068404s
	[INFO] 10.244.1.2:36180 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000396478s
	[INFO] 10.244.1.2:44974 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143897s
	[INFO] 10.244.2.2:45916 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153283s
	[INFO] 10.244.2.2:54255 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107674s
	[INFO] 10.244.2.2:37490 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120001s
	[INFO] 10.244.2.2:35084 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008018s
	[INFO] 10.244.0.4:39477 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000273278s
	[INFO] 10.244.1.2:48205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158614s
	[INFO] 10.244.1.2:59881 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158202s
	[INFO] 10.244.1.2:35567 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000472197s
	[INFO] 10.244.1.2:56490 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000211826s
	[INFO] 10.244.2.2:48246 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156952s
	[INFO] 10.244.2.2:43466 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117313s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=2004&timeout=5m58s&timeoutSeconds=358&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=2002&timeout=9m58s&timeoutSeconds=598&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=2004&timeout=8m46s&timeoutSeconds=526&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6f896bc7211fd382fb408caae82c39ebefe7ef9bd443eb760bba8c0c09fd5fff] <==
	[INFO] 10.244.0.4:56242 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009694s
	[INFO] 10.244.0.4:50224 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170892s
	[INFO] 10.244.0.4:50347 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139284s
	[INFO] 10.244.0.4:43967 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.022155051s
	[INFO] 10.244.0.4:34878 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000206851s
	[INFO] 10.244.1.2:46797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00034142s
	[INFO] 10.244.1.2:43369 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000248825s
	[INFO] 10.244.1.2:56650 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001632154s
	[INFO] 10.244.2.2:38141 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172487s
	[INFO] 10.244.2.2:60906 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158767s
	[INFO] 10.244.0.4:40480 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117274s
	[INFO] 10.244.0.4:47149 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000771s
	[INFO] 10.244.0.4:56834 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000323893s
	[INFO] 10.244.1.2:44664 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146272s
	[INFO] 10.244.1.2:47748 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110683s
	[INFO] 10.244.0.4:39510 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159779s
	[INFO] 10.244.0.4:49210 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125351s
	[INFO] 10.244.0.4:48326 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000179032s
	[INFO] 10.244.2.2:38296 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150584s
	[INFO] 10.244.2.2:58162 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116767s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-844626
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844626
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=ha-844626
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T20_28_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:28:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844626
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:45:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:41:00 +0000   Wed, 12 Jun 2024 20:28:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:41:00 +0000   Wed, 12 Jun 2024 20:28:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:41:00 +0000   Wed, 12 Jun 2024 20:28:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:41:00 +0000   Wed, 12 Jun 2024 20:28:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    ha-844626
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca8d79507bbc4f44bf947af92833058f
	  System UUID:                ca8d7950-7bbc-4f44-bf94-7af92833058f
	  Boot ID:                    da0f0a2a-5126-4bca-9f1f-744b30254ff4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bdzsx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-bqzvn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7db6d8ff4d-lxd6n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-844626                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-mthnq                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-844626             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-844626    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-69ctp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-844626             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-844626                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4m51s              kube-proxy       
	  Normal   Starting                 17m                kube-proxy       
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node ha-844626 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node ha-844626 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node ha-844626 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     17m                kubelet          Node ha-844626 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m                kubelet          Node ha-844626 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                kubelet          Node ha-844626 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           17m                node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	  Normal   NodeReady                17m                kubelet          Node ha-844626 status is now: NodeReady
	  Normal   RegisteredNode           15m                node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	  Warning  ContainerGCFailed        6m33s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m42s              node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	  Normal   RegisteredNode           4m39s              node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	  Normal   RegisteredNode           3m11s              node-controller  Node ha-844626 event: Registered Node ha-844626 in Controller
	
	
	Name:               ha-844626-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844626-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=ha-844626
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T20_30_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:30:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844626-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:45:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:41:43 +0000   Wed, 12 Jun 2024 20:41:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:41:43 +0000   Wed, 12 Jun 2024 20:41:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:41:43 +0000   Wed, 12 Jun 2024 20:41:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:41:43 +0000   Wed, 12 Jun 2024 20:41:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.108
	  Hostname:    ha-844626-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc34ec9a17c449479c11e07f628f1a6e
	  System UUID:                fc34ec9a-17c4-4947-9c11-e07f628f1a6e
	  Boot ID:                    46eea217-77e1-490e-ade1-0905b3fafd17
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bh59q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-844626-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-fz6bl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-844626-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-844626-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-f7ct8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-844626-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-844626-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m24s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-844626-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-844626-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-844626-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-844626-m02 status is now: NodeNotReady
	  Normal  Starting                 5m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node ha-844626-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node ha-844626-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node ha-844626-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m42s                  node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  RegisteredNode           4m39s                  node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-844626-m02 event: Registered Node ha-844626-m02 in Controller
	
	
	Name:               ha-844626-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-844626-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=ha-844626
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T20_32_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:32:35 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-844626-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:43:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 12 Jun 2024 20:43:04 +0000   Wed, 12 Jun 2024 20:44:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 12 Jun 2024 20:43:04 +0000   Wed, 12 Jun 2024 20:44:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 12 Jun 2024 20:43:04 +0000   Wed, 12 Jun 2024 20:44:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 12 Jun 2024 20:43:04 +0000   Wed, 12 Jun 2024 20:44:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    ha-844626-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 76e9ad048f36466a8cb780349dbd0fce
	  System UUID:                76e9ad04-8f36-466a-8cb7-80349dbd0fce
	  Boot ID:                    5ccfdbf7-4568-4904-ac71-2a48c42eb716
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-brwx8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-pwr4p              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-dbk2r           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x3 over 13m)      kubelet          Node ha-844626-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x3 over 13m)      kubelet          Node ha-844626-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x3 over 13m)      kubelet          Node ha-844626-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-844626-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m42s                  node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal   RegisteredNode           4m39s                  node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal   RegisteredNode           3m11s                  node-controller  Node ha-844626-m04 event: Registered Node ha-844626-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-844626-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-844626-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-844626-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-844626-m04 has been rebooted, boot id: 5ccfdbf7-4568-4904-ac71-2a48c42eb716
	  Normal   NodeReady                2m48s                  kubelet          Node ha-844626-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s (x2 over 4m2s)    node-controller  Node ha-844626-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.063983] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073055] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.159207] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.152158] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.286482] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.221083] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +4.069110] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.063782] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.293152] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.089558] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.977157] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.420198] kauditd_printk_skb: 38 callbacks suppressed
	[Jun12 20:30] kauditd_printk_skb: 26 callbacks suppressed
	[Jun12 20:40] systemd-fstab-generator[3745]: Ignoring "noauto" option for root device
	[  +0.151873] systemd-fstab-generator[3757]: Ignoring "noauto" option for root device
	[  +0.179450] systemd-fstab-generator[3771]: Ignoring "noauto" option for root device
	[  +0.147532] systemd-fstab-generator[3783]: Ignoring "noauto" option for root device
	[  +0.285076] systemd-fstab-generator[3811]: Ignoring "noauto" option for root device
	[  +3.505544] systemd-fstab-generator[3914]: Ignoring "noauto" option for root device
	[  +1.298931] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.066926] kauditd_printk_skb: 73 callbacks suppressed
	[ +14.874362] kauditd_printk_skb: 15 callbacks suppressed
	[ +23.997592] kauditd_printk_skb: 5 callbacks suppressed
	[Jun12 20:41] kauditd_printk_skb: 3 callbacks suppressed
	[ +30.195544] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [4350e829162646be417b019b2cb971ff3a4548b2e65be4e5d7cc77a69a513de1] <==
	{"level":"info","ts":"2024-06-12T20:42:22.809862Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:42:22.824165Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a14f9258d3b66c75","to":"d724031a215d8a63","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-06-12T20:42:22.824364Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:42:22.833375Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a14f9258d3b66c75","to":"d724031a215d8a63","stream-type":"stream Message"}
	{"level":"info","ts":"2024-06-12T20:42:22.833515Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"warn","ts":"2024-06-12T20:42:23.388567Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d724031a215d8a63","rtt":"0s","error":"dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-12T20:42:23.388707Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d724031a215d8a63","rtt":"0s","error":"dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"info","ts":"2024-06-12T20:42:36.488745Z","caller":"traceutil/trace.go:171","msg":"trace[1345592607] transaction","detail":"{read_only:false; response_revision:2638; number_of_response:1; }","duration":"155.189584ms","start":"2024-06-12T20:42:36.333517Z","end":"2024-06-12T20:42:36.488707Z","steps":["trace[1345592607] 'process raft request'  (duration: 155.021037ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:43:17.587987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 switched to configuration voters=(11623670073473264757 15152587952431553527)"}
	{"level":"info","ts":"2024-06-12T20:43:17.590453Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","removed-remote-peer-id":"d724031a215d8a63","removed-remote-peer-urls":["https://192.168.39.76:2380"]}
	{"level":"info","ts":"2024-06-12T20:43:17.59066Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d724031a215d8a63"}
	{"level":"warn","ts":"2024-06-12T20:43:17.591185Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:43:17.591424Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d724031a215d8a63"}
	{"level":"warn","ts":"2024-06-12T20:43:17.592173Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:43:17.592428Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:43:17.592743Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"warn","ts":"2024-06-12T20:43:17.593065Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63","error":"context canceled"}
	{"level":"warn","ts":"2024-06-12T20:43:17.593145Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"d724031a215d8a63","error":"failed to read d724031a215d8a63 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-06-12T20:43:17.59326Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"warn","ts":"2024-06-12T20:43:17.594436Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63","error":"context canceled"}
	{"level":"info","ts":"2024-06-12T20:43:17.59455Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:43:17.594652Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:43:17.594692Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"a14f9258d3b66c75","removed-remote-peer-id":"d724031a215d8a63"}
	{"level":"warn","ts":"2024-06-12T20:43:17.604169Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"a14f9258d3b66c75","remote-peer-id-stream-handler":"a14f9258d3b66c75","remote-peer-id-from":"d724031a215d8a63"}
	{"level":"warn","ts":"2024-06-12T20:43:17.614316Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"a14f9258d3b66c75","remote-peer-id-stream-handler":"a14f9258d3b66c75","remote-peer-id-from":"d724031a215d8a63"}
	
	
	==> etcd [6255c7db8bcf221092e924b958073cc807f289b2fed8ea5763d24bed91878a8d] <==
	{"level":"info","ts":"2024-06-12T20:38:35.687945Z","caller":"traceutil/trace.go:171","msg":"trace[892167329] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; }","duration":"858.920602ms","start":"2024-06-12T20:38:34.829017Z","end":"2024-06-12T20:38:35.687937Z","steps":["trace[892167329] 'agreement among raft nodes before linearized reading'  (duration: 853.724812ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:38:35.687958Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T20:38:34.829013Z","time spent":"858.940572ms","remote":"127.0.0.1:40880","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 "}
	2024/06/12 20:38:35 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-06-12T20:38:35.687553Z","caller":"traceutil/trace.go:171","msg":"trace[1513091851] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; }","duration":"863.128292ms","start":"2024-06-12T20:38:34.824414Z","end":"2024-06-12T20:38:35.687543Z","steps":["trace[1513091851] 'agreement among raft nodes before linearized reading'  (duration: 858.150993ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:38:35.691771Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T20:38:34.824411Z","time spent":"867.164025ms","remote":"127.0.0.1:40990","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:10000 "}
	2024/06/12 20:38:35 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-12T20:38:35.806633Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":7815311118762690248,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-06-12T20:38:35.809183Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a14f9258d3b66c75","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-06-12T20:38:35.809447Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d248ce75fc8bdbf7"}
	{"level":"info","ts":"2024-06-12T20:38:35.809463Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d248ce75fc8bdbf7"}
	{"level":"info","ts":"2024-06-12T20:38:35.809487Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d248ce75fc8bdbf7"}
	{"level":"info","ts":"2024-06-12T20:38:35.809658Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7"}
	{"level":"info","ts":"2024-06-12T20:38:35.809712Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7"}
	{"level":"info","ts":"2024-06-12T20:38:35.809776Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d248ce75fc8bdbf7"}
	{"level":"info","ts":"2024-06-12T20:38:35.80981Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d248ce75fc8bdbf7"}
	{"level":"info","ts":"2024-06-12T20:38:35.809819Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:38:35.809832Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:38:35.80985Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:38:35.809919Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:38:35.809965Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:38:35.810015Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:38:35.810027Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d724031a215d8a63"}
	{"level":"info","ts":"2024-06-12T20:38:35.813128Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-06-12T20:38:35.813451Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-06-12T20:38:35.813492Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-844626","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.196:2380"],"advertise-client-urls":["https://192.168.39.196:2379"]}
	
	
	==> kernel <==
	 20:45:53 up 18 min,  0 users,  load average: 0.52, 0.88, 0.63
	Linux ha-844626 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ecf561296e89021cfeb3942a411fdb1a39d363d089d6c0e3abc9f21a0ed0a02b] <==
	I0612 20:40:13.415752       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0612 20:40:13.415920       1 main.go:107] hostIP = 192.168.39.196
	podIP = 192.168.39.196
	I0612 20:40:13.416128       1 main.go:116] setting mtu 1500 for CNI 
	I0612 20:40:13.416177       1 main.go:146] kindnetd IP family: "ipv4"
	I0612 20:40:13.416277       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0612 20:40:13.718903       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0612 20:40:13.719486       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0612 20:40:19.104394       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0612 20:40:22.172348       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0612 20:40:35.174055       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
	
	
	==> kindnet [f432160e35b26c7b012ec4edfd7d00508fb15c4cc8f9547df1507fa19a6dabee] <==
	I0612 20:45:11.164433       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	I0612 20:45:21.173096       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0612 20:45:21.173372       1 main.go:227] handling current node
	I0612 20:45:21.173421       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0612 20:45:21.173445       1 main.go:250] Node ha-844626-m02 has CIDR [10.244.1.0/24] 
	I0612 20:45:21.173629       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0612 20:45:21.173654       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	I0612 20:45:31.188581       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0612 20:45:31.188693       1 main.go:227] handling current node
	I0612 20:45:31.188732       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0612 20:45:31.188750       1 main.go:250] Node ha-844626-m02 has CIDR [10.244.1.0/24] 
	I0612 20:45:31.188918       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0612 20:45:31.188962       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	I0612 20:45:41.195492       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0612 20:45:41.195578       1 main.go:227] handling current node
	I0612 20:45:41.195676       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0612 20:45:41.195707       1 main.go:250] Node ha-844626-m02 has CIDR [10.244.1.0/24] 
	I0612 20:45:41.195835       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0612 20:45:41.195855       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	I0612 20:45:51.207358       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0612 20:45:51.207397       1 main.go:227] handling current node
	I0612 20:45:51.207412       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I0612 20:45:51.207417       1 main.go:250] Node ha-844626-m02 has CIDR [10.244.1.0/24] 
	I0612 20:45:51.207517       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0612 20:45:51.207539       1 main.go:250] Node ha-844626-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [64c4a2d567239f2cf47396cba150c895012356b8ff9c055eafd3490a6316c791] <==
	I0612 20:40:58.647073       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0612 20:40:58.647616       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 20:40:58.647727       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 20:40:58.734804       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0612 20:40:58.734838       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0612 20:40:58.743881       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0612 20:40:58.744703       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0612 20:40:58.745376       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0612 20:40:58.762702       1 shared_informer.go:320] Caches are synced for configmaps
	I0612 20:40:58.763539       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0612 20:40:58.763848       1 aggregator.go:165] initial CRD sync complete...
	I0612 20:40:58.764036       1 autoregister_controller.go:141] Starting autoregister controller
	I0612 20:40:58.764150       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0612 20:40:58.764248       1 cache.go:39] Caches are synced for autoregister controller
	I0612 20:40:58.767308       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0612 20:40:58.776570       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.76]
	I0612 20:40:58.787779       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0612 20:40:58.790150       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 20:40:58.790306       1 policy_source.go:224] refreshing policies
	I0612 20:40:58.863062       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0612 20:40:58.880004       1 controller.go:615] quota admission added evaluator for: endpoints
	I0612 20:40:58.892022       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0612 20:40:58.896507       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0612 20:40:59.650063       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0612 20:41:00.017192       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.108 192.168.39.196 192.168.39.76]
	
	
	==> kube-apiserver [c99ff2541bb480d1e29fe0cdcb21ac962bbb63edc50c303d905d5df9c801bb3f] <==
	I0612 20:40:18.011157       1 options.go:221] external host was not specified, using 192.168.39.196
	I0612 20:40:18.012170       1 server.go:148] Version: v1.30.1
	I0612 20:40:18.012264       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:40:19.040471       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0612 20:40:19.041694       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0612 20:40:19.041727       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0612 20:40:19.041876       1 instance.go:299] Using reconciler: lease
	I0612 20:40:19.042293       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0612 20:40:39.037547       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0612 20:40:39.037547       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0612 20:40:39.042591       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [27fe7677b75490eb9a887b3192e914a38bbc5fd772111c9a731fd0c67b961eea] <==
	E0612 20:43:14.519853       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0612 20:43:14.580480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.58136ms"
	E0612 20:43:14.580566       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0612 20:43:14.580662       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.145µs"
	I0612 20:43:14.591367       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.901µs"
	I0612 20:43:16.334645       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.013µs"
	I0612 20:43:16.385336       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.358µs"
	I0612 20:43:16.687857       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.491µs"
	I0612 20:43:16.698672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.46µs"
	I0612 20:43:17.709741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.994593ms"
	I0612 20:43:17.709915       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.554µs"
	I0612 20:43:30.139991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-844626-m04"
	E0612 20:43:30.196318       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"coordination.k8s.io/v1", Kind:"Lease", Name:"ha-844626-m03", UID:"0a0f23dd-cf3b-4007-8a89-1871d4554a51", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"kube-node-lease"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-844626-m03", UID:"7b2b3ffa-8a44-4d6a-8a76-c0342e012ba5", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io "ha-844626-m03" not found
	E0612 20:43:33.128371       1 gc_controller.go:153] "Failed to get node" err="node \"ha-844626-m03\" not found" logger="pod-garbage-collector-controller" node="ha-844626-m03"
	E0612 20:43:33.128431       1 gc_controller.go:153] "Failed to get node" err="node \"ha-844626-m03\" not found" logger="pod-garbage-collector-controller" node="ha-844626-m03"
	E0612 20:43:33.128438       1 gc_controller.go:153] "Failed to get node" err="node \"ha-844626-m03\" not found" logger="pod-garbage-collector-controller" node="ha-844626-m03"
	E0612 20:43:33.128443       1 gc_controller.go:153] "Failed to get node" err="node \"ha-844626-m03\" not found" logger="pod-garbage-collector-controller" node="ha-844626-m03"
	E0612 20:43:33.128454       1 gc_controller.go:153] "Failed to get node" err="node \"ha-844626-m03\" not found" logger="pod-garbage-collector-controller" node="ha-844626-m03"
	E0612 20:43:53.129014       1 gc_controller.go:153] "Failed to get node" err="node \"ha-844626-m03\" not found" logger="pod-garbage-collector-controller" node="ha-844626-m03"
	E0612 20:43:53.129063       1 gc_controller.go:153] "Failed to get node" err="node \"ha-844626-m03\" not found" logger="pod-garbage-collector-controller" node="ha-844626-m03"
	E0612 20:43:53.129069       1 gc_controller.go:153] "Failed to get node" err="node \"ha-844626-m03\" not found" logger="pod-garbage-collector-controller" node="ha-844626-m03"
	E0612 20:43:53.129075       1 gc_controller.go:153] "Failed to get node" err="node \"ha-844626-m03\" not found" logger="pod-garbage-collector-controller" node="ha-844626-m03"
	E0612 20:43:53.129080       1 gc_controller.go:153] "Failed to get node" err="node \"ha-844626-m03\" not found" logger="pod-garbage-collector-controller" node="ha-844626-m03"
	I0612 20:44:05.732892       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.9571ms"
	I0612 20:44:05.733781       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.312µs"
	
	
	==> kube-controller-manager [d98b9e0f5051ab363ca02821c8f8d231f5298a04d44f3f40a1ac8a145a70e570] <==
	I0612 20:40:19.075323       1 serving.go:380] Generated self-signed cert in-memory
	I0612 20:40:19.507560       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0612 20:40:19.507672       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:40:19.509538       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 20:40:19.509673       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 20:40:19.510258       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 20:40:19.510357       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0612 20:40:40.050959       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.196:8443/healthz\": dial tcp 192.168.39.196:8443: connect: connection refused"
	
	
	==> kube-proxy [847d6ff92e8e601118971db1953ddd8cd8fd05b8a16cb89aef9e6bf5c67a8426] <==
	I0612 20:40:19.621512       1 server_linux.go:69] "Using iptables proxy"
	E0612 20:40:20.252414       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-844626\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0612 20:40:23.324158       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-844626\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0612 20:40:26.396673       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-844626\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0612 20:40:32.541086       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-844626\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0612 20:40:41.756043       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-844626\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0612 20:41:00.940649       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.196"]
	I0612 20:41:01.072731       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 20:41:01.074736       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 20:41:01.074972       1 server_linux.go:165] "Using iptables Proxier"
	I0612 20:41:01.128326       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 20:41:01.128928       1 server.go:872] "Version info" version="v1.30.1"
	I0612 20:41:01.130334       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:41:01.153523       1 config.go:192] "Starting service config controller"
	I0612 20:41:01.153631       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 20:41:01.153750       1 config.go:101] "Starting endpoint slice config controller"
	I0612 20:41:01.153853       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 20:41:01.164088       1 config.go:319] "Starting node config controller"
	I0612 20:41:01.164148       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 20:41:01.254012       1 shared_informer.go:320] Caches are synced for service config
	I0612 20:41:01.254149       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 20:41:01.264498       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b028950fdf37b06d0930b11bec038a982a84719da0974a1238ef96e30f1b786c] <==
	E0612 20:37:25.149289       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:28.221435       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:28.221546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:28.221934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:28.222026       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:28.222290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:28.222376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:34.364149       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:34.364340       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:34.364563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:34.364737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:34.364990       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:34.365157       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:43.580082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:43.580256       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:46.652328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:46.652392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:37:46.652915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:37:46.652984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:38:02.012602       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:38:02.012767       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-844626&resourceVersion=2003": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:38:11.228580       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:38:11.228673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	W0612 20:38:11.228767       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0612 20:38:11.228813       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [223d45eb38f840f0addf592b54b25f587ac32bee0ec1b2b7de20a493f170da92] <==
	W0612 20:38:33.275679       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0612 20:38:33.275887       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0612 20:38:33.493638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0612 20:38:33.493814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0612 20:38:33.525359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0612 20:38:33.525457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0612 20:38:33.712127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0612 20:38:33.712336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0612 20:38:33.797538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0612 20:38:33.797617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0612 20:38:34.221988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0612 20:38:34.222098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0612 20:38:34.273726       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0612 20:38:34.273777       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0612 20:38:34.715597       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0612 20:38:34.715650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0612 20:38:34.919263       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0612 20:38:34.919311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0612 20:38:35.315461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0612 20:38:35.315559       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0612 20:38:35.430126       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 20:38:35.430155       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 20:38:35.642096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0612 20:38:35.642125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0612 20:38:35.658901       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [82299b1981f41471feb0a36cd022834e98c7a620a668655d739be255454304da] <==
	W0612 20:40:49.066597       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:49.066708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:49.285735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.196:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:49.285795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.196:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:49.466421       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.196:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:49.466473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.196:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:49.711304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.196:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:49.711360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.196:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:50.141481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:50.141593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:56.548331       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:56.548402       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:56.915997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0612 20:40:56.916072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0612 20:40:58.657555       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0612 20:40:58.657643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0612 20:40:58.657895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0612 20:40:58.657945       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0612 20:40:58.657991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0612 20:40:58.657999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 20:40:59.363006       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0612 20:43:14.262180       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-brwx8\": pod busybox-fc5497c4f-brwx8 is already assigned to node \"ha-844626-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-brwx8" node="ha-844626-m04"
	E0612 20:43:14.262338       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 624fc7ed-de17-4e7b-81f2-6797529dc20e(default/busybox-fc5497c4f-brwx8) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-brwx8"
	E0612 20:43:14.262376       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-brwx8\": pod busybox-fc5497c4f-brwx8 is already assigned to node \"ha-844626-m04\"" pod="default/busybox-fc5497c4f-brwx8"
	I0612 20:43:14.262397       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-brwx8" node="ha-844626-m04"
	
	
	==> kubelet <==
	Jun 12 20:41:25 ha-844626 kubelet[1371]: E0612 20:41:25.790692    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d94c16d7-da82-41e3-82fe-83ed6e581f69)\"" pod="kube-system/storage-provisioner" podUID="d94c16d7-da82-41e3-82fe-83ed6e581f69"
	Jun 12 20:41:39 ha-844626 kubelet[1371]: I0612 20:41:39.790556    1371 scope.go:117] "RemoveContainer" containerID="6cd3375cfa65cdf6427956610a22c5ad458ab15dcb4c60281d661e3b46f921ce"
	Jun 12 20:42:07 ha-844626 kubelet[1371]: I0612 20:42:07.790440    1371 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-844626" podUID="654fd183-21b0-4df5-b557-ed676c5ecb71"
	Jun 12 20:42:07 ha-844626 kubelet[1371]: I0612 20:42:07.812508    1371 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-844626"
	Jun 12 20:42:08 ha-844626 kubelet[1371]: I0612 20:42:08.005452    1371 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-844626" podUID="654fd183-21b0-4df5-b557-ed676c5ecb71"
	Jun 12 20:42:19 ha-844626 kubelet[1371]: E0612 20:42:19.810138    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:42:19 ha-844626 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:42:19 ha-844626 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:42:19 ha-844626 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:42:19 ha-844626 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:43:19 ha-844626 kubelet[1371]: E0612 20:43:19.806557    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:43:19 ha-844626 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:43:19 ha-844626 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:43:19 ha-844626 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:43:19 ha-844626 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:44:19 ha-844626 kubelet[1371]: E0612 20:44:19.807049    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:44:19 ha-844626 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:44:19 ha-844626 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:44:19 ha-844626 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:44:19 ha-844626 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:45:19 ha-844626 kubelet[1371]: E0612 20:45:19.806579    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:45:19 ha-844626 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:45:19 ha-844626 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:45:19 ha-844626 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:45:19 ha-844626 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 20:45:51.973746   41630 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17779-14199/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
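Note on the stderr above: "bufio.Scanner: token too long" means a single line in lastStart.txt exceeded bufio.Scanner's default 64 KiB token limit (the serialized cluster-config lines in these logs easily do). A minimal Go sketch of reading such a file by enlarging the scanner buffer; the path and sizes here are illustrative, not the test helper's actual code:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Illustrative path; the report reads .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the per-line limit above the 64 KiB default so very long log
		// lines do not fail with "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}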
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-844626 -n ha-844626
helpers_test.go:261: (dbg) Run:  kubectl --context ha-844626 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.79s)
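Note on the repeated kubelet "Could not set up iptables canary" errors in the logs above: ip6tables reports that the `nat` table does not exist, i.e. the guest kernel has no ip6table_nat support loaded. A rough Go sketch of checking for the module on the node; it is only a heuristic (modules built into the kernel do not appear in /proc/modules), and is not part of the test suite:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// /proc/modules lists the loadable modules currently loaded by the kernel.
		data, err := os.ReadFile("/proc/modules")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if strings.Contains(string(data), "ip6table_nat") {
			fmt.Println("ip6table_nat is loaded")
			return
		}
		fmt.Println("ip6table_nat not loaded; `modprobe ip6table_nat` on the node would be the next step")
	}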

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (308.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-991051
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-991051
E0612 21:01:48.613854   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-991051: exit status 82 (2m2.690306483s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-991051-m03"  ...
	* Stopping node "multinode-991051-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-991051" : exit status 82
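The exit status 82 above corresponds to minikube's GUEST_STOP_TIMEOUT: the VMs were still "Running" when minikube gave up stopping them. A minimal, hedged sketch (not the test harness's actual helper) of bounding such a stop with an explicit deadline so a hung shutdown surfaces as a context error instead of blocking:

	package main

	import (
		"context"
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		// Binary path and profile name taken from the test run above.
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "stop", "-p", "multinode-991051")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "stop failed: %v (context: %v)\n", err, ctx.Err())
		}
	}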
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-991051 --wait=true -v=8 --alsologtostderr
E0612 21:04:51.660955   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 21:04:56.707378   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-991051 --wait=true -v=8 --alsologtostderr: (3m3.561936122s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-991051
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-991051 -n multinode-991051
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-991051 logs -n 25: (1.51916284s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-991051 ssh -n                                                                 | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-991051 cp multinode-991051-m02:/home/docker/cp-test.txt                       | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile839762677/001/cp-test_multinode-991051-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n                                                                 | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-991051 cp multinode-991051-m02:/home/docker/cp-test.txt                       | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051:/home/docker/cp-test_multinode-991051-m02_multinode-991051.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n                                                                 | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n multinode-991051 sudo cat                                       | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | /home/docker/cp-test_multinode-991051-m02_multinode-991051.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-991051 cp multinode-991051-m02:/home/docker/cp-test.txt                       | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m03:/home/docker/cp-test_multinode-991051-m02_multinode-991051-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n                                                                 | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n multinode-991051-m03 sudo cat                                   | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | /home/docker/cp-test_multinode-991051-m02_multinode-991051-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-991051 cp testdata/cp-test.txt                                                | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n                                                                 | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-991051 cp multinode-991051-m03:/home/docker/cp-test.txt                       | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile839762677/001/cp-test_multinode-991051-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n                                                                 | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-991051 cp multinode-991051-m03:/home/docker/cp-test.txt                       | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051:/home/docker/cp-test_multinode-991051-m03_multinode-991051.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n                                                                 | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n multinode-991051 sudo cat                                       | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | /home/docker/cp-test_multinode-991051-m03_multinode-991051.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-991051 cp multinode-991051-m03:/home/docker/cp-test.txt                       | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m02:/home/docker/cp-test_multinode-991051-m03_multinode-991051-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n                                                                 | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n multinode-991051-m02 sudo cat                                   | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | /home/docker/cp-test_multinode-991051-m03_multinode-991051-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-991051 node stop m03                                                          | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	| node    | multinode-991051 node start                                                             | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-991051                                                                | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC |                     |
	| stop    | -p multinode-991051                                                                     | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC |                     |
	| start   | -p multinode-991051                                                                     | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:02 UTC | 12 Jun 24 21:05 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-991051                                                                | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:05 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 21:02:41
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 21:02:41.560828   50965 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:02:41.561090   50965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:02:41.561100   50965 out.go:304] Setting ErrFile to fd 2...
	I0612 21:02:41.561105   50965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:02:41.561358   50965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:02:41.561943   50965 out.go:298] Setting JSON to false
	I0612 21:02:41.563122   50965 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6307,"bootTime":1718219855,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:02:41.563354   50965 start.go:139] virtualization: kvm guest
	I0612 21:02:41.565940   50965 out.go:177] * [multinode-991051] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:02:41.567775   50965 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:02:41.567718   50965 notify.go:220] Checking for updates...
	I0612 21:02:41.569355   50965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:02:41.570837   50965 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:02:41.572337   50965 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:02:41.573682   50965 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:02:41.574842   50965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:02:41.576689   50965 config.go:182] Loaded profile config "multinode-991051": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:02:41.576804   50965 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:02:41.577260   50965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:02:41.577312   50965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:02:41.592924   50965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
	I0612 21:02:41.593305   50965 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:02:41.593931   50965 main.go:141] libmachine: Using API Version  1
	I0612 21:02:41.593958   50965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:02:41.594327   50965 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:02:41.594560   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:02:41.630095   50965 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 21:02:41.631534   50965 start.go:297] selected driver: kvm2
	I0612 21:02:41.631552   50965 start.go:901] validating driver "kvm2" against &{Name:multinode-991051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-991051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:02:41.631707   50965 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:02:41.632050   50965 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:02:41.632147   50965 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:02:41.647752   50965 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:02:41.648486   50965 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:02:41.648552   50965 cni.go:84] Creating CNI manager for ""
	I0612 21:02:41.648563   50965 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0612 21:02:41.648615   50965 start.go:340] cluster config:
	{Name:multinode-991051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-991051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:02:41.648738   50965 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:02:41.650733   50965 out.go:177] * Starting "multinode-991051" primary control-plane node in "multinode-991051" cluster
	I0612 21:02:41.651996   50965 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:02:41.652033   50965 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0612 21:02:41.652040   50965 cache.go:56] Caching tarball of preloaded images
	I0612 21:02:41.652161   50965 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:02:41.652176   50965 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 21:02:41.652296   50965 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/config.json ...
	I0612 21:02:41.652498   50965 start.go:360] acquireMachinesLock for multinode-991051: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:02:41.652543   50965 start.go:364] duration metric: took 25.837µs to acquireMachinesLock for "multinode-991051"
	I0612 21:02:41.652560   50965 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:02:41.652579   50965 fix.go:54] fixHost starting: 
	I0612 21:02:41.652817   50965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:02:41.652854   50965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:02:41.667685   50965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45277
	I0612 21:02:41.668112   50965 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:02:41.668546   50965 main.go:141] libmachine: Using API Version  1
	I0612 21:02:41.668564   50965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:02:41.668898   50965 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:02:41.669102   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:02:41.669268   50965 main.go:141] libmachine: (multinode-991051) Calling .GetState
	I0612 21:02:41.670761   50965 fix.go:112] recreateIfNeeded on multinode-991051: state=Running err=<nil>
	W0612 21:02:41.670787   50965 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:02:41.672788   50965 out.go:177] * Updating the running kvm2 "multinode-991051" VM ...
	I0612 21:02:41.674303   50965 machine.go:94] provisionDockerMachine start ...
	I0612 21:02:41.674325   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:02:41.674532   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:02:41.676878   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:41.677410   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:02:41.677440   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:41.677613   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:02:41.677783   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:41.677941   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:41.678076   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:02:41.678240   50965 main.go:141] libmachine: Using SSH client type: native
	I0612 21:02:41.678417   50965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0612 21:02:41.678427   50965 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:02:41.784724   50965 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-991051
	
	I0612 21:02:41.784748   50965 main.go:141] libmachine: (multinode-991051) Calling .GetMachineName
	I0612 21:02:41.784993   50965 buildroot.go:166] provisioning hostname "multinode-991051"
	I0612 21:02:41.785022   50965 main.go:141] libmachine: (multinode-991051) Calling .GetMachineName
	I0612 21:02:41.785188   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:02:41.788277   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:41.788769   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:02:41.788802   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:41.788901   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:02:41.789082   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:41.789242   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:41.789384   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:02:41.789538   50965 main.go:141] libmachine: Using SSH client type: native
	I0612 21:02:41.789716   50965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0612 21:02:41.789740   50965 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-991051 && echo "multinode-991051" | sudo tee /etc/hostname
	I0612 21:02:41.908580   50965 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-991051
	
	I0612 21:02:41.908607   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:02:41.911098   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:41.911495   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:02:41.911523   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:41.911687   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:02:41.911888   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:41.912014   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:41.912164   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:02:41.912302   50965 main.go:141] libmachine: Using SSH client type: native
	I0612 21:02:41.912485   50965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0612 21:02:41.912507   50965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-991051' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-991051/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-991051' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:02:42.016566   50965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
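	Editor's note: the shell snippet above idempotently maps the loopback alias 127.0.1.1 to the machine's hostname in /etc/hosts. A rough Go equivalent of that logic (an illustrative sketch assuming direct local file access, not minikube's actual provisioning code) looks like this:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // ensureHostsEntry mirrors the shell above: if no line already ends with the
    // hostname, either rewrite an existing "127.0.1.1 ..." line or append one.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
            return nil // hostname already present
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.Match(data) {
            data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
        } else {
            data = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
        }
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "multinode-991051"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }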
	I0612 21:02:42.016620   50965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:02:42.016659   50965 buildroot.go:174] setting up certificates
	I0612 21:02:42.016672   50965 provision.go:84] configureAuth start
	I0612 21:02:42.016689   50965 main.go:141] libmachine: (multinode-991051) Calling .GetMachineName
	I0612 21:02:42.016948   50965 main.go:141] libmachine: (multinode-991051) Calling .GetIP
	I0612 21:02:42.019343   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.019717   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:02:42.019744   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.019864   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:02:42.022110   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.022473   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:02:42.022501   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.022649   50965 provision.go:143] copyHostCerts
	I0612 21:02:42.022682   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:02:42.022731   50965 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:02:42.022740   50965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:02:42.022823   50965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:02:42.022917   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:02:42.022942   50965 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:02:42.022952   50965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:02:42.022987   50965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:02:42.023061   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:02:42.023091   50965 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:02:42.023100   50965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:02:42.023132   50965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:02:42.023203   50965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.multinode-991051 san=[127.0.0.1 192.168.39.222 localhost minikube multinode-991051]
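	Editor's note: the line above regenerates the Docker machine server certificate with SANs for 127.0.0.1, 192.168.39.222, localhost, minikube and multinode-991051, signed by the minikube CA. A minimal sketch of issuing such a SAN-bearing certificate with Go's crypto/x509 (illustration only; minikube's own certificate helpers differ, and the stand-in CA here replaces the real ca.pem/ca-key.pem) is:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Stand-in CA; in the provisioning flow above the CA comes from ca.pem/ca-key.pem.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        must(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        must(err)
        caCert, err := x509.ParseCertificate(caDER)
        must(err)

        // Server certificate carrying the SANs listed in the log line above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        must(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-991051"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-991051"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.222")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        must(err)
        must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
    }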
	I0612 21:02:42.077719   50965 provision.go:177] copyRemoteCerts
	I0612 21:02:42.077773   50965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:02:42.077793   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:02:42.080158   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.080455   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:02:42.080487   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.080658   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:02:42.080827   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:42.080980   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:02:42.081159   50965 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/multinode-991051/id_rsa Username:docker}
	I0612 21:02:42.164442   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0612 21:02:42.164524   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:02:42.191120   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0612 21:02:42.191208   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0612 21:02:42.216174   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0612 21:02:42.216254   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:02:42.241394   50965 provision.go:87] duration metric: took 224.705211ms to configureAuth
	I0612 21:02:42.241439   50965 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:02:42.241688   50965 config.go:182] Loaded profile config "multinode-991051": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:02:42.241756   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:02:42.244235   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.244639   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:02:42.244679   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.244868   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:02:42.245048   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:42.245203   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:42.245368   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:02:42.245528   50965 main.go:141] libmachine: Using SSH client type: native
	I0612 21:02:42.245726   50965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0612 21:02:42.245746   50965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:04:13.100101   50965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:04:13.100130   50965 machine.go:97] duration metric: took 1m31.425815689s to provisionDockerMachine
	I0612 21:04:13.100148   50965 start.go:293] postStartSetup for "multinode-991051" (driver="kvm2")
	I0612 21:04:13.100174   50965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:04:13.100211   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:04:13.100556   50965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:04:13.100589   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:04:13.103889   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.104410   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:04:13.104443   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.104615   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:04:13.104812   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:04:13.105099   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:04:13.105243   50965 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/multinode-991051/id_rsa Username:docker}
	I0612 21:04:13.187857   50965 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:04:13.192342   50965 command_runner.go:130] > NAME=Buildroot
	I0612 21:04:13.192358   50965 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0612 21:04:13.192363   50965 command_runner.go:130] > ID=buildroot
	I0612 21:04:13.192368   50965 command_runner.go:130] > VERSION_ID=2023.02.9
	I0612 21:04:13.192376   50965 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0612 21:04:13.192397   50965 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:04:13.192414   50965 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:04:13.192481   50965 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:04:13.192557   50965 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:04:13.192568   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /etc/ssl/certs/214442.pem
	I0612 21:04:13.192654   50965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:04:13.202763   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:04:13.227793   50965 start.go:296] duration metric: took 127.629637ms for postStartSetup
	I0612 21:04:13.227865   50965 fix.go:56] duration metric: took 1m31.575292097s for fixHost
	I0612 21:04:13.227891   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:04:13.230482   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.230900   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:04:13.230926   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.231077   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:04:13.231267   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:04:13.231419   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:04:13.231557   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:04:13.231751   50965 main.go:141] libmachine: Using SSH client type: native
	I0612 21:04:13.231911   50965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0612 21:04:13.231921   50965 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:04:13.332212   50965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718226253.307559674
	
	I0612 21:04:13.332237   50965 fix.go:216] guest clock: 1718226253.307559674
	I0612 21:04:13.332243   50965 fix.go:229] Guest: 2024-06-12 21:04:13.307559674 +0000 UTC Remote: 2024-06-12 21:04:13.227870843 +0000 UTC m=+91.702103711 (delta=79.688831ms)
	I0612 21:04:13.332268   50965 fix.go:200] guest clock delta is within tolerance: 79.688831ms
	I0612 21:04:13.332272   50965 start.go:83] releasing machines lock for "multinode-991051", held for 1m31.679719168s
	I0612 21:04:13.332301   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:04:13.332562   50965 main.go:141] libmachine: (multinode-991051) Calling .GetIP
	I0612 21:04:13.335254   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.335617   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:04:13.335641   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.335840   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:04:13.336548   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:04:13.336766   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:04:13.336866   50965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:04:13.336915   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:04:13.337007   50965 ssh_runner.go:195] Run: cat /version.json
	I0612 21:04:13.337027   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:04:13.339754   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.339823   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.340143   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:04:13.340162   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.340178   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:04:13.340201   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.340338   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:04:13.340472   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:04:13.340546   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:04:13.340629   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:04:13.340688   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:04:13.340831   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:04:13.340848   50965 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/multinode-991051/id_rsa Username:docker}
	I0612 21:04:13.340937   50965 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/multinode-991051/id_rsa Username:docker}
	I0612 21:04:13.426323   50965 command_runner.go:130] > {"iso_version": "v1.33.1-1717668912-19038", "kicbase_version": "v0.0.44-1717518322-19024", "minikube_version": "v1.33.1", "commit": "7bc04027a908a7d4d31c30e8938372fcb07a9689"}
	I0612 21:04:13.426835   50965 ssh_runner.go:195] Run: systemctl --version
	I0612 21:04:13.452322   50965 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0612 21:04:13.452360   50965 command_runner.go:130] > systemd 252 (252)
	I0612 21:04:13.452398   50965 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0612 21:04:13.452474   50965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:04:13.629439   50965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0612 21:04:13.637420   50965 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0612 21:04:13.637705   50965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:04:13.637767   50965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:04:13.677848   50965 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0612 21:04:13.677890   50965 start.go:494] detecting cgroup driver to use...
	I0612 21:04:13.677970   50965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:04:13.696870   50965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:04:13.713136   50965 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:04:13.713186   50965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:04:13.727143   50965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:04:13.740953   50965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:04:13.887957   50965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:04:14.036302   50965 docker.go:233] disabling docker service ...
	I0612 21:04:14.036379   50965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:04:14.054356   50965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:04:14.068441   50965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:04:14.215814   50965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:04:14.364290   50965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:04:14.378568   50965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:04:14.398442   50965 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0612 21:04:14.398498   50965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:04:14.398553   50965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:04:14.411379   50965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:04:14.411464   50965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:04:14.423229   50965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:04:14.435757   50965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:04:14.447736   50965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:04:14.460212   50965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:04:14.471962   50965 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:04:14.484209   50965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
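	Editor's note: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. Assuming a typical drop-in, their net effect should be roughly the following keys (expected shape only, not a verbatim copy of the file on this VM):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]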
	I0612 21:04:14.495578   50965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:04:14.505738   50965 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0612 21:04:14.505820   50965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:04:14.516112   50965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:04:14.656386   50965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:04:20.127985   50965 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.471566471s)
	I0612 21:04:20.128022   50965 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:04:20.128066   50965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:04:20.133277   50965 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0612 21:04:20.133306   50965 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0612 21:04:20.133329   50965 command_runner.go:130] > Device: 0,22	Inode: 1342        Links: 1
	I0612 21:04:20.133341   50965 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0612 21:04:20.133349   50965 command_runner.go:130] > Access: 2024-06-12 21:04:19.979078385 +0000
	I0612 21:04:20.133358   50965 command_runner.go:130] > Modify: 2024-06-12 21:04:19.979078385 +0000
	I0612 21:04:20.133366   50965 command_runner.go:130] > Change: 2024-06-12 21:04:19.979078385 +0000
	I0612 21:04:20.133371   50965 command_runner.go:130] >  Birth: -
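	Editor's note: after the crio restart, the flow waits up to 60s for /var/run/crio/crio.sock and verifies it with stat, as shown above. A minimal, self-contained sketch of that kind of socket wait (assumed behavior, not the exact minikube implementation) is:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the path exists and is a unix socket, or times out.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }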
	I0612 21:04:20.133419   50965 start.go:562] Will wait 60s for crictl version
	I0612 21:04:20.133466   50965 ssh_runner.go:195] Run: which crictl
	I0612 21:04:20.137444   50965 command_runner.go:130] > /usr/bin/crictl
	I0612 21:04:20.137547   50965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:04:20.178278   50965 command_runner.go:130] > Version:  0.1.0
	I0612 21:04:20.178305   50965 command_runner.go:130] > RuntimeName:  cri-o
	I0612 21:04:20.178313   50965 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0612 21:04:20.178322   50965 command_runner.go:130] > RuntimeApiVersion:  v1
	I0612 21:04:20.178344   50965 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:04:20.178396   50965 ssh_runner.go:195] Run: crio --version
	I0612 21:04:20.207067   50965 command_runner.go:130] > crio version 1.29.1
	I0612 21:04:20.207097   50965 command_runner.go:130] > Version:        1.29.1
	I0612 21:04:20.207107   50965 command_runner.go:130] > GitCommit:      unknown
	I0612 21:04:20.207113   50965 command_runner.go:130] > GitCommitDate:  unknown
	I0612 21:04:20.207125   50965 command_runner.go:130] > GitTreeState:   clean
	I0612 21:04:20.207143   50965 command_runner.go:130] > BuildDate:      2024-06-06T15:30:03Z
	I0612 21:04:20.207147   50965 command_runner.go:130] > GoVersion:      go1.21.6
	I0612 21:04:20.207151   50965 command_runner.go:130] > Compiler:       gc
	I0612 21:04:20.207155   50965 command_runner.go:130] > Platform:       linux/amd64
	I0612 21:04:20.207159   50965 command_runner.go:130] > Linkmode:       dynamic
	I0612 21:04:20.207163   50965 command_runner.go:130] > BuildTags:      
	I0612 21:04:20.207167   50965 command_runner.go:130] >   containers_image_ostree_stub
	I0612 21:04:20.207189   50965 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0612 21:04:20.207195   50965 command_runner.go:130] >   btrfs_noversion
	I0612 21:04:20.207203   50965 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0612 21:04:20.207211   50965 command_runner.go:130] >   libdm_no_deferred_remove
	I0612 21:04:20.207216   50965 command_runner.go:130] >   seccomp
	I0612 21:04:20.207222   50965 command_runner.go:130] > LDFlags:          unknown
	I0612 21:04:20.207226   50965 command_runner.go:130] > SeccompEnabled:   true
	I0612 21:04:20.207232   50965 command_runner.go:130] > AppArmorEnabled:  false
	I0612 21:04:20.207321   50965 ssh_runner.go:195] Run: crio --version
	I0612 21:04:20.236266   50965 command_runner.go:130] > crio version 1.29.1
	I0612 21:04:20.236293   50965 command_runner.go:130] > Version:        1.29.1
	I0612 21:04:20.236302   50965 command_runner.go:130] > GitCommit:      unknown
	I0612 21:04:20.236309   50965 command_runner.go:130] > GitCommitDate:  unknown
	I0612 21:04:20.236317   50965 command_runner.go:130] > GitTreeState:   clean
	I0612 21:04:20.236326   50965 command_runner.go:130] > BuildDate:      2024-06-06T15:30:03Z
	I0612 21:04:20.236333   50965 command_runner.go:130] > GoVersion:      go1.21.6
	I0612 21:04:20.236340   50965 command_runner.go:130] > Compiler:       gc
	I0612 21:04:20.236348   50965 command_runner.go:130] > Platform:       linux/amd64
	I0612 21:04:20.236355   50965 command_runner.go:130] > Linkmode:       dynamic
	I0612 21:04:20.236362   50965 command_runner.go:130] > BuildTags:      
	I0612 21:04:20.236374   50965 command_runner.go:130] >   containers_image_ostree_stub
	I0612 21:04:20.236382   50965 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0612 21:04:20.236389   50965 command_runner.go:130] >   btrfs_noversion
	I0612 21:04:20.236396   50965 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0612 21:04:20.236403   50965 command_runner.go:130] >   libdm_no_deferred_remove
	I0612 21:04:20.236409   50965 command_runner.go:130] >   seccomp
	I0612 21:04:20.236419   50965 command_runner.go:130] > LDFlags:          unknown
	I0612 21:04:20.236426   50965 command_runner.go:130] > SeccompEnabled:   true
	I0612 21:04:20.236435   50965 command_runner.go:130] > AppArmorEnabled:  false
	I0612 21:04:20.239737   50965 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:04:20.241132   50965 main.go:141] libmachine: (multinode-991051) Calling .GetIP
	I0612 21:04:20.243954   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:20.244357   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:04:20.244384   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:20.244556   50965 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 21:04:20.248987   50965 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0612 21:04:20.249164   50965 kubeadm.go:877] updating cluster {Name:multinode-991051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.1 ClusterName:multinode-991051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:04:20.249319   50965 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:04:20.249374   50965 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:04:20.299319   50965 command_runner.go:130] > {
	I0612 21:04:20.299341   50965 command_runner.go:130] >   "images": [
	I0612 21:04:20.299355   50965 command_runner.go:130] >     {
	I0612 21:04:20.299366   50965 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0612 21:04:20.299373   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.299384   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0612 21:04:20.299390   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299401   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.299413   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0612 21:04:20.299424   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0612 21:04:20.299433   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299442   50965 command_runner.go:130] >       "size": "65291810",
	I0612 21:04:20.299452   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.299460   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.299473   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.299483   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.299489   50965 command_runner.go:130] >     },
	I0612 21:04:20.299494   50965 command_runner.go:130] >     {
	I0612 21:04:20.299505   50965 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0612 21:04:20.299515   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.299524   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0612 21:04:20.299530   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299540   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.299553   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0612 21:04:20.299571   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0612 21:04:20.299578   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299585   50965 command_runner.go:130] >       "size": "65908273",
	I0612 21:04:20.299593   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.299604   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.299614   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.299621   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.299627   50965 command_runner.go:130] >     },
	I0612 21:04:20.299634   50965 command_runner.go:130] >     {
	I0612 21:04:20.299647   50965 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0612 21:04:20.299658   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.299670   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0612 21:04:20.299679   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299687   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.299703   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0612 21:04:20.299719   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0612 21:04:20.299729   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299737   50965 command_runner.go:130] >       "size": "1363676",
	I0612 21:04:20.299745   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.299759   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.299769   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.299778   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.299787   50965 command_runner.go:130] >     },
	I0612 21:04:20.299793   50965 command_runner.go:130] >     {
	I0612 21:04:20.299806   50965 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0612 21:04:20.299814   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.299827   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0612 21:04:20.299835   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299843   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.299859   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0612 21:04:20.299885   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0612 21:04:20.299893   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299900   50965 command_runner.go:130] >       "size": "31470524",
	I0612 21:04:20.299906   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.299913   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.299923   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.299931   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.299940   50965 command_runner.go:130] >     },
	I0612 21:04:20.299946   50965 command_runner.go:130] >     {
	I0612 21:04:20.299961   50965 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0612 21:04:20.299970   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.299981   50965 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0612 21:04:20.299989   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299996   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.300012   50965 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0612 21:04:20.300044   50965 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0612 21:04:20.300053   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300058   50965 command_runner.go:130] >       "size": "61245718",
	I0612 21:04:20.300065   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.300073   50965 command_runner.go:130] >       "username": "nonroot",
	I0612 21:04:20.300082   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.300089   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.300102   50965 command_runner.go:130] >     },
	I0612 21:04:20.300111   50965 command_runner.go:130] >     {
	I0612 21:04:20.300124   50965 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0612 21:04:20.300139   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.300151   50965 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0612 21:04:20.300160   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300167   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.300179   50965 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0612 21:04:20.300194   50965 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0612 21:04:20.300204   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300212   50965 command_runner.go:130] >       "size": "150779692",
	I0612 21:04:20.300219   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.300229   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.300236   50965 command_runner.go:130] >       },
	I0612 21:04:20.300250   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.300260   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.300268   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.300274   50965 command_runner.go:130] >     },
	I0612 21:04:20.300278   50965 command_runner.go:130] >     {
	I0612 21:04:20.300288   50965 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0612 21:04:20.300298   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.300311   50965 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0612 21:04:20.300319   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300327   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.300342   50965 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0612 21:04:20.300359   50965 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0612 21:04:20.300367   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300374   50965 command_runner.go:130] >       "size": "117601759",
	I0612 21:04:20.300383   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.300390   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.300400   50965 command_runner.go:130] >       },
	I0612 21:04:20.300407   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.300417   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.300425   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.300432   50965 command_runner.go:130] >     },
	I0612 21:04:20.300440   50965 command_runner.go:130] >     {
	I0612 21:04:20.300452   50965 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0612 21:04:20.300461   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.300471   50965 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0612 21:04:20.300487   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300497   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.300529   50965 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0612 21:04:20.300545   50965 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0612 21:04:20.300554   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300562   50965 command_runner.go:130] >       "size": "112170310",
	I0612 21:04:20.300571   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.300578   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.300586   50965 command_runner.go:130] >       },
	I0612 21:04:20.300591   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.300596   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.300601   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.300606   50965 command_runner.go:130] >     },
	I0612 21:04:20.300612   50965 command_runner.go:130] >     {
	I0612 21:04:20.300621   50965 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0612 21:04:20.300627   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.300635   50965 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0612 21:04:20.300640   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300647   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.300659   50965 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0612 21:04:20.300672   50965 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0612 21:04:20.300678   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300684   50965 command_runner.go:130] >       "size": "85933465",
	I0612 21:04:20.300691   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.300697   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.300703   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.300713   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.300720   50965 command_runner.go:130] >     },
	I0612 21:04:20.300728   50965 command_runner.go:130] >     {
	I0612 21:04:20.300742   50965 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0612 21:04:20.300752   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.300761   50965 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0612 21:04:20.300771   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300780   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.300796   50965 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0612 21:04:20.300812   50965 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0612 21:04:20.300829   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300839   50965 command_runner.go:130] >       "size": "63026504",
	I0612 21:04:20.300849   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.300857   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.300865   50965 command_runner.go:130] >       },
	I0612 21:04:20.300872   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.300882   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.300888   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.300896   50965 command_runner.go:130] >     },
	I0612 21:04:20.300903   50965 command_runner.go:130] >     {
	I0612 21:04:20.300916   50965 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0612 21:04:20.300927   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.300937   50965 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0612 21:04:20.300945   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300952   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.300967   50965 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0612 21:04:20.300982   50965 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0612 21:04:20.300991   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300999   50965 command_runner.go:130] >       "size": "750414",
	I0612 21:04:20.301008   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.301017   50965 command_runner.go:130] >         "value": "65535"
	I0612 21:04:20.301031   50965 command_runner.go:130] >       },
	I0612 21:04:20.301041   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.301051   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.301059   50965 command_runner.go:130] >       "pinned": true
	I0612 21:04:20.301067   50965 command_runner.go:130] >     }
	I0612 21:04:20.301073   50965 command_runner.go:130] >   ]
	I0612 21:04:20.301081   50965 command_runner.go:130] > }
	I0612 21:04:20.301300   50965 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:04:20.301314   50965 crio.go:433] Images already preloaded, skipping extraction
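The JSON dump above is the `sudo crictl images --output json` listing that minikube inspects before concluding "all images are preloaded" and skipping preload extraction. Below is a minimal Go sketch of such a check; it is not minikube's actual crio.go/cache_images.go code, and the struct fields and required-image list are assumptions taken only from the JSON visible in this log.

// imagecheck.go: a minimal sketch (not minikube's implementation) of parsing
// `sudo crictl images --output json` and checking for the expected preload images.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImage mirrors the fields visible in the JSON dump above.
type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type crictlImages struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// The same command the log shows being run on the node over SSH.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}

	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}

	// Images the v1.30.1/cri-o preload is expected to contain (taken from the dump above).
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.1",
		"registry.k8s.io/kube-controller-manager:v1.30.1",
		"registry.k8s.io/kube-scheduler:v1.30.1",
		"registry.k8s.io/kube-proxy:v1.30.1",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}

	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}

	for _, want := range required {
		if !have[want] {
			fmt.Printf("missing %s: preload extraction needed\n", want)
			return
		}
	}
	fmt.Println("all images are preloaded for cri-o runtime")
}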
	I0612 21:04:20.301383   50965 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:04:20.342496   50965 command_runner.go:130] > {
	I0612 21:04:20.342525   50965 command_runner.go:130] >   "images": [
	I0612 21:04:20.342532   50965 command_runner.go:130] >     {
	I0612 21:04:20.342549   50965 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0612 21:04:20.342553   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.342563   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0612 21:04:20.342567   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342571   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.342579   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0612 21:04:20.342586   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0612 21:04:20.342590   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342595   50965 command_runner.go:130] >       "size": "65291810",
	I0612 21:04:20.342598   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.342602   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.342607   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.342611   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.342615   50965 command_runner.go:130] >     },
	I0612 21:04:20.342618   50965 command_runner.go:130] >     {
	I0612 21:04:20.342630   50965 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0612 21:04:20.342638   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.342642   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0612 21:04:20.342646   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342649   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.342656   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0612 21:04:20.342662   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0612 21:04:20.342666   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342671   50965 command_runner.go:130] >       "size": "65908273",
	I0612 21:04:20.342674   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.342682   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.342688   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.342701   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.342707   50965 command_runner.go:130] >     },
	I0612 21:04:20.342710   50965 command_runner.go:130] >     {
	I0612 21:04:20.342717   50965 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0612 21:04:20.342723   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.342728   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0612 21:04:20.342731   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342735   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.342745   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0612 21:04:20.342751   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0612 21:04:20.342758   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342761   50965 command_runner.go:130] >       "size": "1363676",
	I0612 21:04:20.342765   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.342769   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.342773   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.342777   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.342780   50965 command_runner.go:130] >     },
	I0612 21:04:20.342783   50965 command_runner.go:130] >     {
	I0612 21:04:20.342789   50965 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0612 21:04:20.342796   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.342801   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0612 21:04:20.342806   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342811   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.342820   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0612 21:04:20.342840   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0612 21:04:20.342846   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342850   50965 command_runner.go:130] >       "size": "31470524",
	I0612 21:04:20.342854   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.342858   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.342861   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.342865   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.342869   50965 command_runner.go:130] >     },
	I0612 21:04:20.342872   50965 command_runner.go:130] >     {
	I0612 21:04:20.342879   50965 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0612 21:04:20.342883   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.342888   50965 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0612 21:04:20.342894   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342897   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.342904   50965 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0612 21:04:20.342914   50965 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0612 21:04:20.342917   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342922   50965 command_runner.go:130] >       "size": "61245718",
	I0612 21:04:20.342928   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.342931   50965 command_runner.go:130] >       "username": "nonroot",
	I0612 21:04:20.342935   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.342938   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.342942   50965 command_runner.go:130] >     },
	I0612 21:04:20.342945   50965 command_runner.go:130] >     {
	I0612 21:04:20.342953   50965 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0612 21:04:20.342960   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.342964   50965 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0612 21:04:20.342970   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342974   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.342982   50965 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0612 21:04:20.342991   50965 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0612 21:04:20.342999   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343008   50965 command_runner.go:130] >       "size": "150779692",
	I0612 21:04:20.343016   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.343019   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.343025   50965 command_runner.go:130] >       },
	I0612 21:04:20.343034   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.343040   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.343044   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.343050   50965 command_runner.go:130] >     },
	I0612 21:04:20.343053   50965 command_runner.go:130] >     {
	I0612 21:04:20.343061   50965 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0612 21:04:20.343066   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.343071   50965 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0612 21:04:20.343077   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343081   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.343091   50965 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0612 21:04:20.343100   50965 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0612 21:04:20.343106   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343110   50965 command_runner.go:130] >       "size": "117601759",
	I0612 21:04:20.343116   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.343120   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.343125   50965 command_runner.go:130] >       },
	I0612 21:04:20.343129   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.343135   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.343139   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.343145   50965 command_runner.go:130] >     },
	I0612 21:04:20.343148   50965 command_runner.go:130] >     {
	I0612 21:04:20.343156   50965 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0612 21:04:20.343162   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.343184   50965 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0612 21:04:20.343193   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343199   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.343223   50965 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0612 21:04:20.343235   50965 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0612 21:04:20.343239   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343248   50965 command_runner.go:130] >       "size": "112170310",
	I0612 21:04:20.343254   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.343258   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.343264   50965 command_runner.go:130] >       },
	I0612 21:04:20.343268   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.343274   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.343283   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.343289   50965 command_runner.go:130] >     },
	I0612 21:04:20.343292   50965 command_runner.go:130] >     {
	I0612 21:04:20.343300   50965 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0612 21:04:20.343304   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.343311   50965 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0612 21:04:20.343315   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343319   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.343326   50965 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0612 21:04:20.343336   50965 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0612 21:04:20.343340   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343344   50965 command_runner.go:130] >       "size": "85933465",
	I0612 21:04:20.343347   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.343351   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.343355   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.343359   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.343363   50965 command_runner.go:130] >     },
	I0612 21:04:20.343366   50965 command_runner.go:130] >     {
	I0612 21:04:20.343372   50965 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0612 21:04:20.343376   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.343381   50965 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0612 21:04:20.343385   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343388   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.343398   50965 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0612 21:04:20.343404   50965 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0612 21:04:20.343410   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343413   50965 command_runner.go:130] >       "size": "63026504",
	I0612 21:04:20.343417   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.343421   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.343433   50965 command_runner.go:130] >       },
	I0612 21:04:20.343437   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.343441   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.343448   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.343451   50965 command_runner.go:130] >     },
	I0612 21:04:20.343454   50965 command_runner.go:130] >     {
	I0612 21:04:20.343462   50965 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0612 21:04:20.343470   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.343477   50965 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0612 21:04:20.343480   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343484   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.343495   50965 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0612 21:04:20.343505   50965 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0612 21:04:20.343508   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343512   50965 command_runner.go:130] >       "size": "750414",
	I0612 21:04:20.343515   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.343519   50965 command_runner.go:130] >         "value": "65535"
	I0612 21:04:20.343524   50965 command_runner.go:130] >       },
	I0612 21:04:20.343531   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.343537   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.343545   50965 command_runner.go:130] >       "pinned": true
	I0612 21:04:20.343550   50965 command_runner.go:130] >     }
	I0612 21:04:20.343558   50965 command_runner.go:130] >   ]
	I0612 21:04:20.343562   50965 command_runner.go:130] > }
	I0612 21:04:20.343736   50965 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:04:20.343751   50965 cache_images.go:84] Images are preloaded, skipping loading
	I0612 21:04:20.343767   50965 kubeadm.go:928] updating node { 192.168.39.222 8443 v1.30.1 crio true true} ...
	I0612 21:04:20.343884   50965 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-991051 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-991051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
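The kubelet [Unit]/[Service] fragment logged above is assembled from the node values also shown in the log (version v1.30.1, hostname multinode-991051, node IP 192.168.39.222). The short text/template sketch below reproduces that fragment; it is illustrative only, not minikube's kubeadm.go implementation, and the struct and field names are invented for the example.

// kubeletunit.go: a minimal sketch of rendering the kubelet unit fragment
// shown in the log from the node values it contains.
package main

import (
	"os"
	"text/template"
)

// nodeConfig holds only the values that appear in the logged ExecStart line;
// the field names are illustrative, not minikube's.
type nodeConfig struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	cfg := nodeConfig{
		KubernetesVersion: "v1.30.1",
		NodeName:          "multinode-991051",
		NodeIP:            "192.168.39.222",
	}
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}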
	I0612 21:04:20.343945   50965 ssh_runner.go:195] Run: crio config
	I0612 21:04:20.387207   50965 command_runner.go:130] ! time="2024-06-12 21:04:20.362161457Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0612 21:04:20.392923   50965 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0612 21:04:20.406718   50965 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0612 21:04:20.406743   50965 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0612 21:04:20.406750   50965 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0612 21:04:20.406753   50965 command_runner.go:130] > #
	I0612 21:04:20.406759   50965 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0612 21:04:20.406765   50965 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0612 21:04:20.406771   50965 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0612 21:04:20.406778   50965 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0612 21:04:20.406781   50965 command_runner.go:130] > # reload'.
	I0612 21:04:20.406787   50965 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0612 21:04:20.406792   50965 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0612 21:04:20.406798   50965 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0612 21:04:20.406803   50965 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0612 21:04:20.406806   50965 command_runner.go:130] > [crio]
	I0612 21:04:20.406812   50965 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0612 21:04:20.406817   50965 command_runner.go:130] > # containers images, in this directory.
	I0612 21:04:20.406831   50965 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0612 21:04:20.406841   50965 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0612 21:04:20.406849   50965 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0612 21:04:20.406859   50965 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I0612 21:04:20.406862   50965 command_runner.go:130] > # imagestore = ""
	I0612 21:04:20.406868   50965 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0612 21:04:20.406875   50965 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0612 21:04:20.406879   50965 command_runner.go:130] > storage_driver = "overlay"
	I0612 21:04:20.406895   50965 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0612 21:04:20.406903   50965 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0612 21:04:20.406907   50965 command_runner.go:130] > storage_option = [
	I0612 21:04:20.406911   50965 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0612 21:04:20.406914   50965 command_runner.go:130] > ]
	I0612 21:04:20.406920   50965 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0612 21:04:20.406926   50965 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0612 21:04:20.406931   50965 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0612 21:04:20.406936   50965 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0612 21:04:20.406942   50965 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0612 21:04:20.406946   50965 command_runner.go:130] > # always happen on a node reboot
	I0612 21:04:20.406954   50965 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0612 21:04:20.406966   50965 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0612 21:04:20.406974   50965 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0612 21:04:20.406979   50965 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0612 21:04:20.406983   50965 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0612 21:04:20.406993   50965 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0612 21:04:20.407000   50965 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0612 21:04:20.407008   50965 command_runner.go:130] > # internal_wipe = true
	I0612 21:04:20.407019   50965 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0612 21:04:20.407025   50965 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0612 21:04:20.407031   50965 command_runner.go:130] > # internal_repair = false
	I0612 21:04:20.407036   50965 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0612 21:04:20.407045   50965 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0612 21:04:20.407050   50965 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0612 21:04:20.407057   50965 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0612 21:04:20.407063   50965 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0612 21:04:20.407068   50965 command_runner.go:130] > [crio.api]
	I0612 21:04:20.407077   50965 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0612 21:04:20.407085   50965 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0612 21:04:20.407090   50965 command_runner.go:130] > # IP address on which the stream server will listen.
	I0612 21:04:20.407096   50965 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0612 21:04:20.407102   50965 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0612 21:04:20.407109   50965 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0612 21:04:20.407113   50965 command_runner.go:130] > # stream_port = "0"
	I0612 21:04:20.407118   50965 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0612 21:04:20.407125   50965 command_runner.go:130] > # stream_enable_tls = false
	I0612 21:04:20.407140   50965 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0612 21:04:20.407147   50965 command_runner.go:130] > # stream_idle_timeout = ""
	I0612 21:04:20.407153   50965 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0612 21:04:20.407161   50965 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0612 21:04:20.407165   50965 command_runner.go:130] > # minutes.
	I0612 21:04:20.407181   50965 command_runner.go:130] > # stream_tls_cert = ""
	I0612 21:04:20.407193   50965 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0612 21:04:20.407206   50965 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0612 21:04:20.407213   50965 command_runner.go:130] > # stream_tls_key = ""
	I0612 21:04:20.407219   50965 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0612 21:04:20.407229   50965 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0612 21:04:20.407261   50965 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0612 21:04:20.407269   50965 command_runner.go:130] > # stream_tls_ca = ""
	I0612 21:04:20.407276   50965 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0612 21:04:20.407280   50965 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0612 21:04:20.407287   50965 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0612 21:04:20.407292   50965 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0612 21:04:20.407302   50965 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0612 21:04:20.407309   50965 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0612 21:04:20.407315   50965 command_runner.go:130] > [crio.runtime]
	I0612 21:04:20.407321   50965 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0612 21:04:20.407329   50965 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0612 21:04:20.407332   50965 command_runner.go:130] > # "nofile=1024:2048"
	I0612 21:04:20.407341   50965 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0612 21:04:20.407345   50965 command_runner.go:130] > # default_ulimits = [
	I0612 21:04:20.407348   50965 command_runner.go:130] > # ]
	I0612 21:04:20.407354   50965 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0612 21:04:20.407365   50965 command_runner.go:130] > # no_pivot = false
	I0612 21:04:20.407371   50965 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0612 21:04:20.407379   50965 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0612 21:04:20.407384   50965 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0612 21:04:20.407391   50965 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0612 21:04:20.407396   50965 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0612 21:04:20.407405   50965 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0612 21:04:20.407410   50965 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0612 21:04:20.407416   50965 command_runner.go:130] > # Cgroup setting for conmon
	I0612 21:04:20.407423   50965 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0612 21:04:20.407429   50965 command_runner.go:130] > conmon_cgroup = "pod"
	I0612 21:04:20.407435   50965 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0612 21:04:20.407440   50965 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0612 21:04:20.407447   50965 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0612 21:04:20.407452   50965 command_runner.go:130] > conmon_env = [
	I0612 21:04:20.407457   50965 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0612 21:04:20.407463   50965 command_runner.go:130] > ]
	I0612 21:04:20.407468   50965 command_runner.go:130] > # Additional environment variables to set for all the
	I0612 21:04:20.407473   50965 command_runner.go:130] > # containers. These are overridden if set in the
	I0612 21:04:20.407481   50965 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0612 21:04:20.407485   50965 command_runner.go:130] > # default_env = [
	I0612 21:04:20.407491   50965 command_runner.go:130] > # ]
	I0612 21:04:20.407496   50965 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0612 21:04:20.407505   50965 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0612 21:04:20.407509   50965 command_runner.go:130] > # selinux = false
	I0612 21:04:20.407517   50965 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0612 21:04:20.407522   50965 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0612 21:04:20.407528   50965 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0612 21:04:20.407532   50965 command_runner.go:130] > # seccomp_profile = ""
	I0612 21:04:20.407537   50965 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0612 21:04:20.407543   50965 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0612 21:04:20.407551   50965 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0612 21:04:20.407556   50965 command_runner.go:130] > # which might increase security.
	I0612 21:04:20.407561   50965 command_runner.go:130] > # This option is currently deprecated,
	I0612 21:04:20.407566   50965 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0612 21:04:20.407573   50965 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0612 21:04:20.407583   50965 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0612 21:04:20.407591   50965 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0612 21:04:20.407597   50965 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0612 21:04:20.407605   50965 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0612 21:04:20.407610   50965 command_runner.go:130] > # This option supports live configuration reload.
	I0612 21:04:20.407617   50965 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0612 21:04:20.407622   50965 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0612 21:04:20.407626   50965 command_runner.go:130] > # the cgroup blockio controller.
	I0612 21:04:20.407630   50965 command_runner.go:130] > # blockio_config_file = ""
	I0612 21:04:20.407639   50965 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0612 21:04:20.407645   50965 command_runner.go:130] > # blockio parameters.
	I0612 21:04:20.407648   50965 command_runner.go:130] > # blockio_reload = false
	I0612 21:04:20.407657   50965 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0612 21:04:20.407661   50965 command_runner.go:130] > # irqbalance daemon.
	I0612 21:04:20.407674   50965 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0612 21:04:20.407682   50965 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0612 21:04:20.407689   50965 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0612 21:04:20.407697   50965 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0612 21:04:20.407703   50965 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0612 21:04:20.407712   50965 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0612 21:04:20.407717   50965 command_runner.go:130] > # This option supports live configuration reload.
	I0612 21:04:20.407721   50965 command_runner.go:130] > # rdt_config_file = ""
	I0612 21:04:20.407726   50965 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0612 21:04:20.407733   50965 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0612 21:04:20.407759   50965 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0612 21:04:20.407767   50965 command_runner.go:130] > # separate_pull_cgroup = ""
	I0612 21:04:20.407773   50965 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0612 21:04:20.407778   50965 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0612 21:04:20.407782   50965 command_runner.go:130] > # will be added.
	I0612 21:04:20.407786   50965 command_runner.go:130] > # default_capabilities = [
	I0612 21:04:20.407790   50965 command_runner.go:130] > # 	"CHOWN",
	I0612 21:04:20.407793   50965 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0612 21:04:20.407797   50965 command_runner.go:130] > # 	"FSETID",
	I0612 21:04:20.407800   50965 command_runner.go:130] > # 	"FOWNER",
	I0612 21:04:20.407804   50965 command_runner.go:130] > # 	"SETGID",
	I0612 21:04:20.407807   50965 command_runner.go:130] > # 	"SETUID",
	I0612 21:04:20.407816   50965 command_runner.go:130] > # 	"SETPCAP",
	I0612 21:04:20.407823   50965 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0612 21:04:20.407826   50965 command_runner.go:130] > # 	"KILL",
	I0612 21:04:20.407829   50965 command_runner.go:130] > # ]
	I0612 21:04:20.407836   50965 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0612 21:04:20.407845   50965 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0612 21:04:20.407849   50965 command_runner.go:130] > # add_inheritable_capabilities = false
	I0612 21:04:20.407856   50965 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0612 21:04:20.407863   50965 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0612 21:04:20.407867   50965 command_runner.go:130] > default_sysctls = [
	I0612 21:04:20.407873   50965 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0612 21:04:20.407876   50965 command_runner.go:130] > ]
	I0612 21:04:20.407881   50965 command_runner.go:130] > # List of devices on the host that a
	I0612 21:04:20.407889   50965 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0612 21:04:20.407893   50965 command_runner.go:130] > # allowed_devices = [
	I0612 21:04:20.407900   50965 command_runner.go:130] > # 	"/dev/fuse",
	I0612 21:04:20.407903   50965 command_runner.go:130] > # ]
	I0612 21:04:20.407908   50965 command_runner.go:130] > # List of additional devices. specified as
	I0612 21:04:20.407917   50965 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0612 21:04:20.407922   50965 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0612 21:04:20.407930   50965 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0612 21:04:20.407934   50965 command_runner.go:130] > # additional_devices = [
	I0612 21:04:20.407937   50965 command_runner.go:130] > # ]
	I0612 21:04:20.407942   50965 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0612 21:04:20.407949   50965 command_runner.go:130] > # cdi_spec_dirs = [
	I0612 21:04:20.407953   50965 command_runner.go:130] > # 	"/etc/cdi",
	I0612 21:04:20.407958   50965 command_runner.go:130] > # 	"/var/run/cdi",
	I0612 21:04:20.407962   50965 command_runner.go:130] > # ]
	I0612 21:04:20.407968   50965 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0612 21:04:20.407974   50965 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0612 21:04:20.407978   50965 command_runner.go:130] > # Defaults to false.
	I0612 21:04:20.407983   50965 command_runner.go:130] > # device_ownership_from_security_context = false
	I0612 21:04:20.407990   50965 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0612 21:04:20.407996   50965 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0612 21:04:20.408000   50965 command_runner.go:130] > # hooks_dir = [
	I0612 21:04:20.408007   50965 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0612 21:04:20.408017   50965 command_runner.go:130] > # ]
	I0612 21:04:20.408023   50965 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0612 21:04:20.408031   50965 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0612 21:04:20.408036   50965 command_runner.go:130] > # its default mounts from the following two files:
	I0612 21:04:20.408040   50965 command_runner.go:130] > #
	I0612 21:04:20.408046   50965 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0612 21:04:20.408054   50965 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0612 21:04:20.408059   50965 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0612 21:04:20.408065   50965 command_runner.go:130] > #
	I0612 21:04:20.408071   50965 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0612 21:04:20.408077   50965 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0612 21:04:20.408083   50965 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0612 21:04:20.408090   50965 command_runner.go:130] > #      only add mounts it finds in this file.
	I0612 21:04:20.408093   50965 command_runner.go:130] > #
	I0612 21:04:20.408097   50965 command_runner.go:130] > # default_mounts_file = ""
	I0612 21:04:20.408104   50965 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0612 21:04:20.408110   50965 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0612 21:04:20.408117   50965 command_runner.go:130] > pids_limit = 1024
	I0612 21:04:20.408122   50965 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0612 21:04:20.408130   50965 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0612 21:04:20.408138   50965 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0612 21:04:20.408145   50965 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0612 21:04:20.408152   50965 command_runner.go:130] > # log_size_max = -1
	I0612 21:04:20.408158   50965 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0612 21:04:20.408162   50965 command_runner.go:130] > # log_to_journald = false
	I0612 21:04:20.408170   50965 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0612 21:04:20.408175   50965 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0612 21:04:20.408182   50965 command_runner.go:130] > # Path to directory for container attach sockets.
	I0612 21:04:20.408187   50965 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0612 21:04:20.408194   50965 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0612 21:04:20.408199   50965 command_runner.go:130] > # bind_mount_prefix = ""
	I0612 21:04:20.408205   50965 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0612 21:04:20.408209   50965 command_runner.go:130] > # read_only = false
	I0612 21:04:20.408214   50965 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0612 21:04:20.408227   50965 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0612 21:04:20.408234   50965 command_runner.go:130] > # live configuration reload.
	I0612 21:04:20.408246   50965 command_runner.go:130] > # log_level = "info"
	I0612 21:04:20.408257   50965 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0612 21:04:20.408264   50965 command_runner.go:130] > # This option supports live configuration reload.
	I0612 21:04:20.408268   50965 command_runner.go:130] > # log_filter = ""
	I0612 21:04:20.408274   50965 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0612 21:04:20.408280   50965 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0612 21:04:20.408286   50965 command_runner.go:130] > # separated by comma.
	I0612 21:04:20.408293   50965 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0612 21:04:20.408299   50965 command_runner.go:130] > # uid_mappings = ""
	I0612 21:04:20.408305   50965 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0612 21:04:20.408315   50965 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0612 21:04:20.408319   50965 command_runner.go:130] > # separated by comma.
	I0612 21:04:20.408329   50965 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0612 21:04:20.408336   50965 command_runner.go:130] > # gid_mappings = ""
	I0612 21:04:20.408342   50965 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0612 21:04:20.408350   50965 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0612 21:04:20.408359   50965 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0612 21:04:20.408369   50965 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0612 21:04:20.408373   50965 command_runner.go:130] > # minimum_mappable_uid = -1
	I0612 21:04:20.408380   50965 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0612 21:04:20.408390   50965 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0612 21:04:20.408398   50965 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0612 21:04:20.408405   50965 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0612 21:04:20.408412   50965 command_runner.go:130] > # minimum_mappable_gid = -1
	I0612 21:04:20.408418   50965 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0612 21:04:20.408426   50965 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0612 21:04:20.408432   50965 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0612 21:04:20.408438   50965 command_runner.go:130] > # ctr_stop_timeout = 30
	I0612 21:04:20.408444   50965 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0612 21:04:20.408450   50965 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0612 21:04:20.408454   50965 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0612 21:04:20.408459   50965 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0612 21:04:20.408466   50965 command_runner.go:130] > drop_infra_ctr = false
	I0612 21:04:20.408471   50965 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0612 21:04:20.408479   50965 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0612 21:04:20.408487   50965 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0612 21:04:20.408497   50965 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0612 21:04:20.408507   50965 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0612 21:04:20.408513   50965 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0612 21:04:20.408520   50965 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0612 21:04:20.408525   50965 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0612 21:04:20.408531   50965 command_runner.go:130] > # shared_cpuset = ""
	I0612 21:04:20.408536   50965 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0612 21:04:20.408543   50965 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0612 21:04:20.408552   50965 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0612 21:04:20.408564   50965 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0612 21:04:20.408570   50965 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0612 21:04:20.408575   50965 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0612 21:04:20.408583   50965 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0612 21:04:20.408587   50965 command_runner.go:130] > # enable_criu_support = false
	I0612 21:04:20.408595   50965 command_runner.go:130] > # Enable/disable the generation of the container,
	I0612 21:04:20.408600   50965 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0612 21:04:20.408607   50965 command_runner.go:130] > # enable_pod_events = false
	I0612 21:04:20.408613   50965 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0612 21:04:20.408618   50965 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0612 21:04:20.408626   50965 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0612 21:04:20.408629   50965 command_runner.go:130] > # default_runtime = "runc"
	I0612 21:04:20.408635   50965 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0612 21:04:20.408642   50965 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of the path being created as a directory).
	I0612 21:04:20.408653   50965 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0612 21:04:20.408660   50965 command_runner.go:130] > # creation as a file is not desired either.
	I0612 21:04:20.408671   50965 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0612 21:04:20.408678   50965 command_runner.go:130] > # the hostname is being managed dynamically.
	I0612 21:04:20.408682   50965 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0612 21:04:20.408688   50965 command_runner.go:130] > # ]
	I0612 21:04:20.408694   50965 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0612 21:04:20.408702   50965 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0612 21:04:20.408709   50965 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0612 21:04:20.408716   50965 command_runner.go:130] > # Each entry in the table should follow the format:
	I0612 21:04:20.408719   50965 command_runner.go:130] > #
	I0612 21:04:20.408724   50965 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0612 21:04:20.408737   50965 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0612 21:04:20.408782   50965 command_runner.go:130] > # runtime_type = "oci"
	I0612 21:04:20.408790   50965 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0612 21:04:20.408794   50965 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0612 21:04:20.408799   50965 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0612 21:04:20.408803   50965 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0612 21:04:20.408807   50965 command_runner.go:130] > # monitor_env = []
	I0612 21:04:20.408811   50965 command_runner.go:130] > # privileged_without_host_devices = false
	I0612 21:04:20.408815   50965 command_runner.go:130] > # allowed_annotations = []
	I0612 21:04:20.408820   50965 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0612 21:04:20.408826   50965 command_runner.go:130] > # Where:
	I0612 21:04:20.408832   50965 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0612 21:04:20.408840   50965 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0612 21:04:20.408846   50965 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0612 21:04:20.408854   50965 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0612 21:04:20.408858   50965 command_runner.go:130] > #   in $PATH.
	I0612 21:04:20.408863   50965 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0612 21:04:20.408871   50965 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0612 21:04:20.408877   50965 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0612 21:04:20.408883   50965 command_runner.go:130] > #   state.
	I0612 21:04:20.408889   50965 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0612 21:04:20.408897   50965 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0612 21:04:20.408902   50965 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0612 21:04:20.408907   50965 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0612 21:04:20.408916   50965 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0612 21:04:20.408922   50965 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0612 21:04:20.408929   50965 command_runner.go:130] > #   The currently recognized values are:
	I0612 21:04:20.408935   50965 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0612 21:04:20.408942   50965 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0612 21:04:20.408950   50965 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0612 21:04:20.408958   50965 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0612 21:04:20.408966   50965 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0612 21:04:20.408974   50965 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0612 21:04:20.408980   50965 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0612 21:04:20.408988   50965 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0612 21:04:20.408994   50965 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0612 21:04:20.409004   50965 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0612 21:04:20.409016   50965 command_runner.go:130] > #   deprecated option "conmon".
	I0612 21:04:20.409025   50965 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0612 21:04:20.409031   50965 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0612 21:04:20.409040   50965 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0612 21:04:20.409045   50965 command_runner.go:130] > #   should be moved to the container's cgroup
	I0612 21:04:20.409054   50965 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0612 21:04:20.409059   50965 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0612 21:04:20.409068   50965 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0612 21:04:20.409073   50965 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0612 21:04:20.409078   50965 command_runner.go:130] > #
	I0612 21:04:20.409083   50965 command_runner.go:130] > # Using the seccomp notifier feature:
	I0612 21:04:20.409086   50965 command_runner.go:130] > #
	I0612 21:04:20.409092   50965 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0612 21:04:20.409100   50965 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0612 21:04:20.409103   50965 command_runner.go:130] > #
	I0612 21:04:20.409109   50965 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0612 21:04:20.409117   50965 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0612 21:04:20.409120   50965 command_runner.go:130] > #
	I0612 21:04:20.409126   50965 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0612 21:04:20.409132   50965 command_runner.go:130] > # feature.
	I0612 21:04:20.409135   50965 command_runner.go:130] > #
	I0612 21:04:20.409140   50965 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0612 21:04:20.409146   50965 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0612 21:04:20.409152   50965 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0612 21:04:20.409161   50965 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0612 21:04:20.409166   50965 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0612 21:04:20.409172   50965 command_runner.go:130] > #
	I0612 21:04:20.409178   50965 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0612 21:04:20.409186   50965 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0612 21:04:20.409190   50965 command_runner.go:130] > #
	I0612 21:04:20.409195   50965 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0612 21:04:20.409203   50965 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0612 21:04:20.409206   50965 command_runner.go:130] > #
	I0612 21:04:20.409212   50965 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0612 21:04:20.409220   50965 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0612 21:04:20.409223   50965 command_runner.go:130] > # limitation.
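	As a concrete illustration of the notifier flow described in the comments above, here is a minimal, hypothetical Pod built with the Kubernetes Go API types (not part of this test run): it carries the "io.kubernetes.cri-o.seccompNotifierAction" annotation and sets restartPolicy to Never, as the comments require. Pod name and image are assumptions for the sketch.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Hypothetical pod name and image; the annotation key and the
	// restartPolicy requirement come from the CRI-O comments above.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "seccomp-notifier-demo",
			Annotations: map[string]string{
				// "stop" asks CRI-O to terminate the workload after the
				// 5 second timeout once a blocked syscall is observed.
				"io.kubernetes.cri-o.seccompNotifierAction": "stop",
			},
		},
		Spec: corev1.PodSpec{
			// Required so the kubelet does not immediately restart the container.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "workload",
				Image: "registry.k8s.io/pause:3.9",
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}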
	I0612 21:04:20.409235   50965 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0612 21:04:20.409242   50965 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0612 21:04:20.409245   50965 command_runner.go:130] > runtime_type = "oci"
	I0612 21:04:20.409249   50965 command_runner.go:130] > runtime_root = "/run/runc"
	I0612 21:04:20.409259   50965 command_runner.go:130] > runtime_config_path = ""
	I0612 21:04:20.409264   50965 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0612 21:04:20.409270   50965 command_runner.go:130] > monitor_cgroup = "pod"
	I0612 21:04:20.409274   50965 command_runner.go:130] > monitor_exec_cgroup = ""
	I0612 21:04:20.409279   50965 command_runner.go:130] > monitor_env = [
	I0612 21:04:20.409284   50965 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0612 21:04:20.409288   50965 command_runner.go:130] > ]
	I0612 21:04:20.409293   50965 command_runner.go:130] > privileged_without_host_devices = false
	I0612 21:04:20.409301   50965 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0612 21:04:20.409306   50965 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0612 21:04:20.409318   50965 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0612 21:04:20.409328   50965 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0612 21:04:20.409335   50965 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0612 21:04:20.409345   50965 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0612 21:04:20.409353   50965 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0612 21:04:20.409363   50965 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0612 21:04:20.409368   50965 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0612 21:04:20.409374   50965 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0612 21:04:20.409378   50965 command_runner.go:130] > # Example:
	I0612 21:04:20.409381   50965 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0612 21:04:20.409385   50965 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0612 21:04:20.409390   50965 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0612 21:04:20.409394   50965 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0612 21:04:20.409397   50965 command_runner.go:130] > # cpuset = 0
	I0612 21:04:20.409401   50965 command_runner.go:130] > # cpushares = "0-1"
	I0612 21:04:20.409404   50965 command_runner.go:130] > # Where:
	I0612 21:04:20.409408   50965 command_runner.go:130] > # The workload name is workload-type.
	I0612 21:04:20.409415   50965 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0612 21:04:20.409420   50965 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0612 21:04:20.409425   50965 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0612 21:04:20.409432   50965 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0612 21:04:20.409437   50965 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0612 21:04:20.409450   50965 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0612 21:04:20.409456   50965 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0612 21:04:20.409460   50965 command_runner.go:130] > # Default value is set to true
	I0612 21:04:20.409464   50965 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0612 21:04:20.409469   50965 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0612 21:04:20.409473   50965 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0612 21:04:20.409477   50965 command_runner.go:130] > # Default value is set to 'false'
	I0612 21:04:20.409481   50965 command_runner.go:130] > # disable_hostport_mapping = false
	I0612 21:04:20.409487   50965 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0612 21:04:20.409489   50965 command_runner.go:130] > #
	I0612 21:04:20.409495   50965 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0612 21:04:20.409502   50965 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0612 21:04:20.409508   50965 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0612 21:04:20.409514   50965 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0612 21:04:20.409519   50965 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0612 21:04:20.409522   50965 command_runner.go:130] > [crio.image]
	I0612 21:04:20.409528   50965 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0612 21:04:20.409532   50965 command_runner.go:130] > # default_transport = "docker://"
	I0612 21:04:20.409537   50965 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0612 21:04:20.409543   50965 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0612 21:04:20.409549   50965 command_runner.go:130] > # global_auth_file = ""
	I0612 21:04:20.409553   50965 command_runner.go:130] > # The image used to instantiate infra containers.
	I0612 21:04:20.409558   50965 command_runner.go:130] > # This option supports live configuration reload.
	I0612 21:04:20.409562   50965 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0612 21:04:20.409568   50965 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0612 21:04:20.409575   50965 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0612 21:04:20.409580   50965 command_runner.go:130] > # This option supports live configuration reload.
	I0612 21:04:20.409587   50965 command_runner.go:130] > # pause_image_auth_file = ""
	I0612 21:04:20.409592   50965 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0612 21:04:20.409599   50965 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0612 21:04:20.409604   50965 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0612 21:04:20.409612   50965 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0612 21:04:20.409616   50965 command_runner.go:130] > # pause_command = "/pause"
	I0612 21:04:20.409624   50965 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0612 21:04:20.409630   50965 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0612 21:04:20.409637   50965 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0612 21:04:20.409647   50965 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0612 21:04:20.409654   50965 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0612 21:04:20.409659   50965 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0612 21:04:20.409665   50965 command_runner.go:130] > # pinned_images = [
	I0612 21:04:20.409669   50965 command_runner.go:130] > # ]
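	The matching rules above (exact match of the full name, glob with a trailing *, keyword with wildcards on both ends) can be summarized in a few lines of Go. This is only an illustrative sketch of the described semantics, not CRI-O's actual implementation:

package main

import (
	"fmt"
	"strings"
)

// matchesPinned reports whether an image name matches a pinned_images entry,
// following the rules described above: exact match of the entire name,
// glob with a trailing *, or keyword match with wildcards on both ends.
func matchesPinned(name, pattern string) bool {
	switch {
	case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
		return strings.Contains(name, strings.Trim(pattern, "*"))
	case strings.HasSuffix(pattern, "*"):
		return strings.HasPrefix(name, strings.TrimSuffix(pattern, "*"))
	default:
		return name == pattern
	}
}

func main() {
	// The pause image referenced later in this config, matched two ways.
	fmt.Println(matchesPinned("registry.k8s.io/pause:3.9", "registry.k8s.io/*")) // true (glob)
	fmt.Println(matchesPinned("registry.k8s.io/pause:3.9", "*pause*"))           // true (keyword)
}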
	I0612 21:04:20.409675   50965 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0612 21:04:20.409682   50965 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0612 21:04:20.409688   50965 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0612 21:04:20.409696   50965 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0612 21:04:20.409701   50965 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0612 21:04:20.409707   50965 command_runner.go:130] > # signature_policy = ""
	I0612 21:04:20.409712   50965 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0612 21:04:20.409721   50965 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0612 21:04:20.409728   50965 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0612 21:04:20.409736   50965 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0612 21:04:20.409741   50965 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0612 21:04:20.409748   50965 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0612 21:04:20.409754   50965 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0612 21:04:20.409762   50965 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0612 21:04:20.409766   50965 command_runner.go:130] > # changing them here.
	I0612 21:04:20.409772   50965 command_runner.go:130] > # insecure_registries = [
	I0612 21:04:20.409776   50965 command_runner.go:130] > # ]
	I0612 21:04:20.409785   50965 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0612 21:04:20.409789   50965 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0612 21:04:20.409799   50965 command_runner.go:130] > # image_volumes = "mkdir"
	I0612 21:04:20.409806   50965 command_runner.go:130] > # Temporary directory to use for storing big files
	I0612 21:04:20.409810   50965 command_runner.go:130] > # big_files_temporary_dir = ""
	I0612 21:04:20.409819   50965 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0612 21:04:20.409823   50965 command_runner.go:130] > # CNI plugins.
	I0612 21:04:20.409828   50965 command_runner.go:130] > [crio.network]
	I0612 21:04:20.409835   50965 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0612 21:04:20.409842   50965 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0612 21:04:20.409846   50965 command_runner.go:130] > # cni_default_network = ""
	I0612 21:04:20.409853   50965 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0612 21:04:20.409858   50965 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0612 21:04:20.409865   50965 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0612 21:04:20.409874   50965 command_runner.go:130] > # plugin_dirs = [
	I0612 21:04:20.409880   50965 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0612 21:04:20.409883   50965 command_runner.go:130] > # ]
	I0612 21:04:20.409888   50965 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0612 21:04:20.409893   50965 command_runner.go:130] > [crio.metrics]
	I0612 21:04:20.409897   50965 command_runner.go:130] > # Globally enable or disable metrics support.
	I0612 21:04:20.409902   50965 command_runner.go:130] > enable_metrics = true
	I0612 21:04:20.409906   50965 command_runner.go:130] > # Specify enabled metrics collectors.
	I0612 21:04:20.409911   50965 command_runner.go:130] > # Per default all metrics are enabled.
	I0612 21:04:20.409916   50965 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0612 21:04:20.409925   50965 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0612 21:04:20.409930   50965 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0612 21:04:20.409936   50965 command_runner.go:130] > # metrics_collectors = [
	I0612 21:04:20.409940   50965 command_runner.go:130] > # 	"operations",
	I0612 21:04:20.409944   50965 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0612 21:04:20.409951   50965 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0612 21:04:20.409955   50965 command_runner.go:130] > # 	"operations_errors",
	I0612 21:04:20.409961   50965 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0612 21:04:20.409965   50965 command_runner.go:130] > # 	"image_pulls_by_name",
	I0612 21:04:20.409970   50965 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0612 21:04:20.409974   50965 command_runner.go:130] > # 	"image_pulls_failures",
	I0612 21:04:20.409980   50965 command_runner.go:130] > # 	"image_pulls_successes",
	I0612 21:04:20.409984   50965 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0612 21:04:20.409988   50965 command_runner.go:130] > # 	"image_layer_reuse",
	I0612 21:04:20.409994   50965 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0612 21:04:20.409999   50965 command_runner.go:130] > # 	"containers_oom_total",
	I0612 21:04:20.410004   50965 command_runner.go:130] > # 	"containers_oom",
	I0612 21:04:20.410008   50965 command_runner.go:130] > # 	"processes_defunct",
	I0612 21:04:20.410012   50965 command_runner.go:130] > # 	"operations_total",
	I0612 21:04:20.410016   50965 command_runner.go:130] > # 	"operations_latency_seconds",
	I0612 21:04:20.410022   50965 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0612 21:04:20.410026   50965 command_runner.go:130] > # 	"operations_errors_total",
	I0612 21:04:20.410032   50965 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0612 21:04:20.410036   50965 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0612 21:04:20.410043   50965 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0612 21:04:20.410048   50965 command_runner.go:130] > # 	"image_pulls_success_total",
	I0612 21:04:20.410056   50965 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0612 21:04:20.410063   50965 command_runner.go:130] > # 	"containers_oom_count_total",
	I0612 21:04:20.410068   50965 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0612 21:04:20.410074   50965 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0612 21:04:20.410077   50965 command_runner.go:130] > # ]
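	The prefix equivalence described above ("operations" treated the same as "crio_operations" and "container_runtime_crio_operations") amounts to stripping the optional prefixes before comparing collector names; a tiny illustrative Go sketch, not the actual CRI-O code:

package main

import (
	"fmt"
	"strings"
)

// normalizeCollector strips the optional "container_runtime_" and "crio_"
// prefixes so that all three spellings of a collector compare equal.
func normalizeCollector(name string) string {
	name = strings.TrimPrefix(name, "container_runtime_")
	name = strings.TrimPrefix(name, "crio_")
	return name
}

func main() {
	for _, n := range []string{"operations", "crio_operations", "container_runtime_crio_operations"} {
		fmt.Println(normalizeCollector(n)) // all print "operations"
	}
}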
	I0612 21:04:20.410083   50965 command_runner.go:130] > # The port on which the metrics server will listen.
	I0612 21:04:20.410089   50965 command_runner.go:130] > # metrics_port = 9090
	I0612 21:04:20.410093   50965 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0612 21:04:20.410103   50965 command_runner.go:130] > # metrics_socket = ""
	I0612 21:04:20.410110   50965 command_runner.go:130] > # The certificate for the secure metrics server.
	I0612 21:04:20.410116   50965 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0612 21:04:20.410124   50965 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0612 21:04:20.410129   50965 command_runner.go:130] > # certificate on any modification event.
	I0612 21:04:20.410135   50965 command_runner.go:130] > # metrics_cert = ""
	I0612 21:04:20.410140   50965 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0612 21:04:20.410147   50965 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0612 21:04:20.410151   50965 command_runner.go:130] > # metrics_key = ""
	I0612 21:04:20.410158   50965 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0612 21:04:20.410162   50965 command_runner.go:130] > [crio.tracing]
	I0612 21:04:20.410169   50965 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0612 21:04:20.410173   50965 command_runner.go:130] > # enable_tracing = false
	I0612 21:04:20.410181   50965 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0612 21:04:20.410185   50965 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0612 21:04:20.410194   50965 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0612 21:04:20.410198   50965 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0612 21:04:20.410204   50965 command_runner.go:130] > # CRI-O NRI configuration.
	I0612 21:04:20.410208   50965 command_runner.go:130] > [crio.nri]
	I0612 21:04:20.410212   50965 command_runner.go:130] > # Globally enable or disable NRI.
	I0612 21:04:20.410217   50965 command_runner.go:130] > # enable_nri = false
	I0612 21:04:20.410221   50965 command_runner.go:130] > # NRI socket to listen on.
	I0612 21:04:20.410228   50965 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0612 21:04:20.410233   50965 command_runner.go:130] > # NRI plugin directory to use.
	I0612 21:04:20.410239   50965 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0612 21:04:20.410244   50965 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0612 21:04:20.410249   50965 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0612 21:04:20.410257   50965 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0612 21:04:20.410270   50965 command_runner.go:130] > # nri_disable_connections = false
	I0612 21:04:20.410278   50965 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0612 21:04:20.410282   50965 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0612 21:04:20.410287   50965 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0612 21:04:20.410294   50965 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0612 21:04:20.410299   50965 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0612 21:04:20.410303   50965 command_runner.go:130] > [crio.stats]
	I0612 21:04:20.410308   50965 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0612 21:04:20.410316   50965 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0612 21:04:20.410320   50965 command_runner.go:130] > # stats_collection_period = 0
	I0612 21:04:20.410475   50965 cni.go:84] Creating CNI manager for ""
	I0612 21:04:20.410488   50965 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0612 21:04:20.410501   50965 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:04:20.410527   50965 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.222 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-991051 NodeName:multinode-991051 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:04:20.410652   50965 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-991051"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:04:20.410718   50965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:04:20.424154   50965 command_runner.go:130] > kubeadm
	I0612 21:04:20.424176   50965 command_runner.go:130] > kubectl
	I0612 21:04:20.424183   50965 command_runner.go:130] > kubelet
	I0612 21:04:20.424204   50965 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:04:20.424295   50965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:04:20.436620   50965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0612 21:04:20.455596   50965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:04:20.473711   50965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0612 21:04:20.491616   50965 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I0612 21:04:20.495723   50965 command_runner.go:130] > 192.168.39.222	control-plane.minikube.internal
	I0612 21:04:20.495895   50965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:04:20.643275   50965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:04:20.659243   50965 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051 for IP: 192.168.39.222
	I0612 21:04:20.659263   50965 certs.go:194] generating shared ca certs ...
	I0612 21:04:20.659289   50965 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:04:20.659489   50965 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:04:20.659544   50965 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:04:20.659557   50965 certs.go:256] generating profile certs ...
	I0612 21:04:20.659677   50965 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/client.key
	I0612 21:04:20.659764   50965 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/apiserver.key.36fb12b1
	I0612 21:04:20.659824   50965 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/proxy-client.key
	I0612 21:04:20.659839   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 21:04:20.659858   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0612 21:04:20.659875   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 21:04:20.659891   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 21:04:20.659906   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 21:04:20.659925   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 21:04:20.659942   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 21:04:20.659959   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 21:04:20.660033   50965 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:04:20.660067   50965 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:04:20.660077   50965 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:04:20.660109   50965 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:04:20.660139   50965 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:04:20.660170   50965 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:04:20.660224   50965 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:04:20.660265   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /usr/share/ca-certificates/214442.pem
	I0612 21:04:20.660294   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:04:20.660314   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem -> /usr/share/ca-certificates/21444.pem
	I0612 21:04:20.661120   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:04:20.688195   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:04:20.713286   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:04:20.738379   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:04:20.762997   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 21:04:20.787921   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 21:04:20.812579   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:04:20.837459   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:04:20.862646   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:04:20.889791   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:04:20.915765   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:04:20.941067   50965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:04:20.958214   50965 ssh_runner.go:195] Run: openssl version
	I0612 21:04:20.964639   50965 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0612 21:04:20.964735   50965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:04:20.976733   50965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:04:20.981426   50965 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:04:20.981473   50965 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:04:20.981509   50965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:04:20.987568   50965 command_runner.go:130] > 51391683
	I0612 21:04:20.987639   50965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:04:20.997654   50965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:04:21.008966   50965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:04:21.013316   50965 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:04:21.013357   50965 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:04:21.013395   50965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:04:21.019000   50965 command_runner.go:130] > 3ec20f2e
	I0612 21:04:21.019071   50965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:04:21.029206   50965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:04:21.040649   50965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:04:21.045322   50965 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:04:21.045354   50965 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:04:21.045394   50965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:04:21.050928   50965 command_runner.go:130] > b5213941
	I0612 21:04:21.051290   50965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
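	The three openssl/ln steps above install each CA certificate under its OpenSSL subject hash (e.g. /etc/ssl/certs/b5213941.0). A hedged Go sketch of the same sequence, shelling out to openssl exactly as the test does (it assumes openssl is on PATH and the process can write to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the commands in the log: compute the certificate's
// OpenSSL subject hash and symlink /etc/ssl/certs/<hash>.0 to the cert.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // "ln -fs" semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}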
	I0612 21:04:21.061315   50965 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:04:21.065905   50965 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:04:21.065934   50965 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0612 21:04:21.065942   50965 command_runner.go:130] > Device: 253,1	Inode: 2104342     Links: 1
	I0612 21:04:21.065951   50965 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0612 21:04:21.065960   50965 command_runner.go:130] > Access: 2024-06-12 20:58:00.839277397 +0000
	I0612 21:04:21.065966   50965 command_runner.go:130] > Modify: 2024-06-12 20:58:00.839277397 +0000
	I0612 21:04:21.065973   50965 command_runner.go:130] > Change: 2024-06-12 20:58:00.839277397 +0000
	I0612 21:04:21.065981   50965 command_runner.go:130] >  Birth: 2024-06-12 20:58:00.839277397 +0000
	I0612 21:04:21.066084   50965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:04:21.071946   50965 command_runner.go:130] > Certificate will not expire
	I0612 21:04:21.072024   50965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:04:21.077561   50965 command_runner.go:130] > Certificate will not expire
	I0612 21:04:21.077803   50965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:04:21.083525   50965 command_runner.go:130] > Certificate will not expire
	I0612 21:04:21.083583   50965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:04:21.089340   50965 command_runner.go:130] > Certificate will not expire
	I0612 21:04:21.089390   50965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:04:21.094951   50965 command_runner.go:130] > Certificate will not expire
	I0612 21:04:21.094996   50965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 21:04:21.100458   50965 command_runner.go:130] > Certificate will not expire
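	The repeated "openssl x509 -noout -checkend 86400" calls above simply ask whether each certificate expires within the next 24 hours. The same check can be done natively in Go with crypto/x509; this is an illustrative sketch, not the minikube code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -checkend <seconds>" answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}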
	I0612 21:04:21.100625   50965 kubeadm.go:391] StartCluster: {Name:multinode-991051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-991051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:04:21.100811   50965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:04:21.100867   50965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:04:21.141500   50965 command_runner.go:130] > 55c89de09a94cc863ff747da4ec19a23f20c354694f2ecfdff2e685ac2e65f3a
	I0612 21:04:21.141534   50965 command_runner.go:130] > 5444a9801baa417feaec95ab2d88e718edc11b32229d9c81ed1fc47ca3eb5c13
	I0612 21:04:21.141544   50965 command_runner.go:130] > 98f8978fdf74512b23844eeef590cf9687d0dc616691561f425007b8c60de24c
	I0612 21:04:21.141553   50965 command_runner.go:130] > 2388fa10173fb8f675b905600b8b657a7329203a4b98c3e612c5c01c94269906
	I0612 21:04:21.141562   50965 command_runner.go:130] > e8bdc02b5de3e8061a405cbb7daa6d053de15008582ea77c42820564bacb2aaf
	I0612 21:04:21.141571   50965 command_runner.go:130] > 3ae9672be263494df9fd7a011d1621f35c8cafd2080af8bdc740e73f7fa580ce
	I0612 21:04:21.141580   50965 command_runner.go:130] > 3280d415399d241dd67375b235ecd4588814568e5e825a7ffdba48158bea7c85
	I0612 21:04:21.141591   50965 command_runner.go:130] > 40967dcc017916934d08c71706f88dd7901b682671677d7cbf4b369fc15930c0
	I0612 21:04:21.141626   50965 cri.go:89] found id: "55c89de09a94cc863ff747da4ec19a23f20c354694f2ecfdff2e685ac2e65f3a"
	I0612 21:04:21.141646   50965 cri.go:89] found id: "5444a9801baa417feaec95ab2d88e718edc11b32229d9c81ed1fc47ca3eb5c13"
	I0612 21:04:21.141651   50965 cri.go:89] found id: "98f8978fdf74512b23844eeef590cf9687d0dc616691561f425007b8c60de24c"
	I0612 21:04:21.141655   50965 cri.go:89] found id: "2388fa10173fb8f675b905600b8b657a7329203a4b98c3e612c5c01c94269906"
	I0612 21:04:21.141658   50965 cri.go:89] found id: "e8bdc02b5de3e8061a405cbb7daa6d053de15008582ea77c42820564bacb2aaf"
	I0612 21:04:21.141661   50965 cri.go:89] found id: "3ae9672be263494df9fd7a011d1621f35c8cafd2080af8bdc740e73f7fa580ce"
	I0612 21:04:21.141681   50965 cri.go:89] found id: "3280d415399d241dd67375b235ecd4588814568e5e825a7ffdba48158bea7c85"
	I0612 21:04:21.141688   50965 cri.go:89] found id: "40967dcc017916934d08c71706f88dd7901b682671677d7cbf4b369fc15930c0"
	I0612 21:04:21.141690   50965 cri.go:89] found id: ""
	I0612 21:04:21.141740   50965 ssh_runner.go:195] Run: sudo runc list -f json
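	The container discovery step above shells out to crictl with a namespace label filter and treats every non-empty output line as a container ID (the "found id:" lines). A minimal Go sketch of that pattern, using the same crictl invocation as the log but without the "sudo -s eval" wrapper (assumes crictl is on PATH and the caller has sufficient privileges):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the crictl query shown in the log above and
// returns the container IDs, one per non-empty output line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}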
	
	
	==> CRI-O <==
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.805230339Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3fdda236-b8e6-46e3-9070-7d5309a21be1 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.806191108Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be9dc329-92d3-4b97-b095-b7d52b1488fb name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.806635391Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718226345806613504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be9dc329-92d3-4b97-b095-b7d52b1488fb name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.807067369Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62497906-b48b-45a3-81f4-f32912b41ee0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.807186650Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62497906-b48b-45a3-81f4-f32912b41ee0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.807515928Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e356af2991acd35e8c5e1010c2edcfafcdaa44202f7a7de1f64fdcb129b1cb97,PodSandboxId:2765d8d89dc60b11465338bb625cf83233ae0c47977122526dda4e2c3eb8de0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718226302132060909,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e15df1e1c381db7fd134e2b814595d42af6ae8a54981cc908a49c53c4a1bb9,PodSandboxId:9f85e7d9a139355a4d11c93ef8d33423360aa0622ee97fb6e5a8846239efb0c1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718226268653878749,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00d5ce7c2be9f85077bc8e0388d9fa32ba1bda0561e11f78b247f01d99da3d6,PodSandboxId:d5431b2fdc6ccb5032e7e75dee7a0bcdc31ee038c99fa77cb164f49b50497852,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718226268505600075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48723a00f68034b2b9157bc84da729cb2ba5698b870150e02f80d3c7e1621aae,PodSandboxId:835ea78f2a30143650553e31633dddd64d8b30b2506bed7f27aea0cb8bf3a695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718226268445869395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1282d310fbf74c5662700466fab7cb94876f3856f4651e5f83284f2361bd8724,PodSandboxId:2377cfd1f1177a0030b2481ab2e4ad7abf93a225046e0458e3e1fbb8b2a3da91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718226268350438967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f189b9415f0984871a7f457c39dda70e32109051b8c0727a20cbd483bb4e9c8c,PodSandboxId:34f76e53eae69485c9673bb9813104abd4aeabf142b53b0ee79e0f471b99cc02,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718226263608310806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5a0dd458,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeba3ac7698f6380d6082ee5c673f572a710e176fb3a3d5dc6b43dfb7bb4130c,PodSandboxId:b9069b6210b629e3d6551e12622c1deadccfb1e5282b8305a936196343dc7e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718226263537240543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]string{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01295b32b6815016713b036abc654cee51e14f9aba50c15ab21f991e5ea1bac3,PodSandboxId:b1f7f88ba9fcf4d8c5436e7c2b210e62b6b270bd56b89dd703af6152ddf286a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718226263492477646,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467c4660de162c74d8bc29ebfdaebba7594ac023fa9d24a9cf66e9bbf967f960,PodSandboxId:b52346ae1ba5d421641ac822dcfa3dad8012e8185faab1a12b5318e8e6d999d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718226263472607810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3e772291419e269d7d384e11d755c67e1382c12181adfa9479ea1f2d722dee,PodSandboxId:27760f2e721b4cabd837e0e00013bfb9abfc74b3640aebacc2c0dc2c6f63291d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718225956372753821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c89de09a94cc863ff747da4ec19a23f20c354694f2ecfdff2e685ac2e65f3a,PodSandboxId:747bf00d4dc3c16fb5474ececcbda50427fd76c65921e57da775b0343ac22a12,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718225909231185107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5444a9801baa417feaec95ab2d88e718edc11b32229d9c81ed1fc47ca3eb5c13,PodSandboxId:28629b256cb1868a1ac54f06575a79eb7f183a2451add37a0f1a6b4c33e855cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718225909175177713,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.kubernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f8978fdf74512b23844eeef590cf9687d0dc616691561f425007b8c60de24c,PodSandboxId:b8a668a1284ee4597b6e3789502bc8ef03720dd341f92e3ea388cf996f7b0a4a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718225907816411305,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2388fa10173fb8f675b905600b8b657a7329203a4b98c3e612c5c01c94269906,PodSandboxId:b5f91e0ef8f81e93939cf8164692777100c10dfb5084c1d282f6a852c7a5d430,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718225904083066654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae9672be263494df9fd7a011d1621f35c8cafd2080af8bdc740e73f7fa580ce,PodSandboxId:e5bd299b8eaf1d06cd44d6ddacc1fef873a8b45636dd63f5e5b6848973158413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718225884347734931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8bdc02b5de3e8061a405cbb7daa6d053de15008582ea77c42820564bacb2aaf,PodSandboxId:6b5d45256b5732bcdd42f67c430b771dea6e25a6c3d5530705a5543d4904e0f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718225884354541310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc
5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.container.hash: 5a0dd458,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40967dcc017916934d08c71706f88dd7901b682671677d7cbf4b369fc15930c0,PodSandboxId:b09fab012871969680122263723e9e7810048137c6bd1be1640fe928263093cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718225884317379092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3280d415399d241dd67375b235ecd4588814568e5e825a7ffdba48158bea7c85,PodSandboxId:cb24e84e5e4faef0a1d547f72a678c388fb91c90f9b6b7e8fd8e07a31043ca75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718225884330858527,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62497906-b48b-45a3-81f4-f32912b41ee0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.845276966Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=ffcdeb2f-d673-4f8d-b70f-9bf530cd7640 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.845801506Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2765d8d89dc60b11465338bb625cf83233ae0c47977122526dda4e2c3eb8de0a,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-846cm,Uid:8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718226301984024387,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T21:04:27.832732135Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d5431b2fdc6ccb5032e7e75dee7a0bcdc31ee038c99fa77cb164f49b50497852,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-bfxk2,Uid:5a2029f4-e926-41da-8fbc-b6cf94d25ad9,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1718226268232664088,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T21:04:27.832733542Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2377cfd1f1177a0030b2481ab2e4ad7abf93a225046e0458e3e1fbb8b2a3da91,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1da33189-d542-48a2-a11a-67720a303a16,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718226268177049921,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]stri
ng{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-12T21:04:27.832738431Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:835ea78f2a30143650553e31633dddd64d8b30b2506bed7f27aea0cb8bf3a695,Metadata:&PodSandboxMetadata{Name:kube-proxy-nqg55,Uid:2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,Namespace:kube-system,Atte
mpt:1,},State:SANDBOX_READY,CreatedAt:1718226268169482333,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T21:04:27.832735873Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f85e7d9a139355a4d11c93ef8d33423360aa0622ee97fb6e5a8846239efb0c1,Metadata:&PodSandboxMetadata{Name:kindnet-f72hp,Uid:d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718226268165207132,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,k8s-app: kindnet,pod-template-generat
ion: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T21:04:27.832723962Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:34f76e53eae69485c9673bb9813104abd4aeabf142b53b0ee79e0f471b99cc02,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-991051,Uid:aac465b2fbc69d8dc5f521a4275b2a26,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718226263334980838,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc5f521a4275b2a26,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.222:8443,kubernetes.io/config.hash: aac465b2fbc69d8dc5f521a4275b2a26,kubernetes.io/config.seen: 2024-06-12T21:04:22.835448518Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b9069b6210b629e3d6551e12622c1deadc
cfb1e5282b8305a936196343dc7e79,Metadata:&PodSandboxMetadata{Name:etcd-multinode-991051,Uid:4e0a810eaa25137a02b499d4ae5d28e9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718226263328317427,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.222:2379,kubernetes.io/config.hash: 4e0a810eaa25137a02b499d4ae5d28e9,kubernetes.io/config.seen: 2024-06-12T21:04:22.835444029Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b1f7f88ba9fcf4d8c5436e7c2b210e62b6b270bd56b89dd703af6152ddf286a4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-991051,Uid:87f66a3a9f00e1fa2e05a8b5d9d430ad,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718226263303868587,Labels:map[string]st
ring{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 87f66a3a9f00e1fa2e05a8b5d9d430ad,kubernetes.io/config.seen: 2024-06-12T21:04:22.835449821Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b52346ae1ba5d421641ac822dcfa3dad8012e8185faab1a12b5318e8e6d999d2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-991051,Uid:77969bba38d22785253409acfd4d32bf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718226263302257019,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77969bba38d22785253409acfd4d32bf,tier: control-plane,},Annotations:map[string]string{kuberne
tes.io/config.hash: 77969bba38d22785253409acfd4d32bf,kubernetes.io/config.seen: 2024-06-12T21:04:22.835451097Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:27760f2e721b4cabd837e0e00013bfb9abfc74b3640aebacc2c0dc2c6f63291d,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-846cm,Uid:8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718225953691926635,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T20:59:11.881026939Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:747bf00d4dc3c16fb5474ececcbda50427fd76c65921e57da775b0343ac22a12,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-bfxk2,Uid:5a2029f4-e926-41da-8fbc-b6cf94d25ad9,Namespace:kube-system,Attemp
t:0,},State:SANDBOX_NOTREADY,CreatedAt:1718225909044538230,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T20:58:28.736520108Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:28629b256cb1868a1ac54f06575a79eb7f183a2451add37a0f1a6b4c33e855cd,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1da33189-d542-48a2-a11a-67720a303a16,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718225909042784793,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[
string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-12T20:58:28.730880853Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b8a668a1284ee4597b6e3789502bc8ef03720dd341f92e3ea388cf996f7b0a4a,Metadata:&PodSandboxMetadata{Name:kindnet-f72hp,Uid:d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,Namespace:kube-sys
tem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718225903781503730,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T20:58:23.473217388Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b5f91e0ef8f81e93939cf8164692777100c10dfb5084c1d282f6a852c7a5d430,Metadata:&PodSandboxMetadata{Name:kube-proxy-nqg55,Uid:2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718225903765776224,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,k8s-app: kub
e-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T20:58:23.450151066Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6b5d45256b5732bcdd42f67c430b771dea6e25a6c3d5530705a5543d4904e0f0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-991051,Uid:aac465b2fbc69d8dc5f521a4275b2a26,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718225884153739126,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc5f521a4275b2a26,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.222:8443,kubernetes.io/config.hash: aac465b2fbc69d8dc5f521a4275b2a26,kubernetes.io/config.seen: 2024-06-12T20:58:03.648291811Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cb24e84e5e4fae
f0a1d547f72a678c388fb91c90f9b6b7e8fd8e07a31043ca75,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-991051,Uid:87f66a3a9f00e1fa2e05a8b5d9d430ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718225884129869186,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 87f66a3a9f00e1fa2e05a8b5d9d430ad,kubernetes.io/config.seen: 2024-06-12T20:58:03.648293020Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b09fab012871969680122263723e9e7810048137c6bd1be1640fe928263093cf,Metadata:&PodSandboxMetadata{Name:etcd-multinode-991051,Uid:4e0a810eaa25137a02b499d4ae5d28e9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718225884127884989,Labels:map[string]string{component
: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.222:2379,kubernetes.io/config.hash: 4e0a810eaa25137a02b499d4ae5d28e9,kubernetes.io/config.seen: 2024-06-12T20:58:03.648286324Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e5bd299b8eaf1d06cd44d6ddacc1fef873a8b45636dd63f5e5b6848973158413,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-991051,Uid:77969bba38d22785253409acfd4d32bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718225884116729485,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77969bba38d22785253409acfd4d32bf,tier: control-plane,},Annotati
ons:map[string]string{kubernetes.io/config.hash: 77969bba38d22785253409acfd4d32bf,kubernetes.io/config.seen: 2024-06-12T20:58:03.648294020Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ffcdeb2f-d673-4f8d-b70f-9bf530cd7640 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.847266431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbaa108d-560f-4fcb-afa8-610ff105bc80 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.847350441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbaa108d-560f-4fcb-afa8-610ff105bc80 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.847707461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e356af2991acd35e8c5e1010c2edcfafcdaa44202f7a7de1f64fdcb129b1cb97,PodSandboxId:2765d8d89dc60b11465338bb625cf83233ae0c47977122526dda4e2c3eb8de0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718226302132060909,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e15df1e1c381db7fd134e2b814595d42af6ae8a54981cc908a49c53c4a1bb9,PodSandboxId:9f85e7d9a139355a4d11c93ef8d33423360aa0622ee97fb6e5a8846239efb0c1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718226268653878749,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00d5ce7c2be9f85077bc8e0388d9fa32ba1bda0561e11f78b247f01d99da3d6,PodSandboxId:d5431b2fdc6ccb5032e7e75dee7a0bcdc31ee038c99fa77cb164f49b50497852,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718226268505600075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48723a00f68034b2b9157bc84da729cb2ba5698b870150e02f80d3c7e1621aae,PodSandboxId:835ea78f2a30143650553e31633dddd64d8b30b2506bed7f27aea0cb8bf3a695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718226268445869395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1282d310fbf74c5662700466fab7cb94876f3856f4651e5f83284f2361bd8724,PodSandboxId:2377cfd1f1177a0030b2481ab2e4ad7abf93a225046e0458e3e1fbb8b2a3da91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718226268350438967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f189b9415f0984871a7f457c39dda70e32109051b8c0727a20cbd483bb4e9c8c,PodSandboxId:34f76e53eae69485c9673bb9813104abd4aeabf142b53b0ee79e0f471b99cc02,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718226263608310806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5a0dd458,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeba3ac7698f6380d6082ee5c673f572a710e176fb3a3d5dc6b43dfb7bb4130c,PodSandboxId:b9069b6210b629e3d6551e12622c1deadccfb1e5282b8305a936196343dc7e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718226263537240543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]string{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01295b32b6815016713b036abc654cee51e14f9aba50c15ab21f991e5ea1bac3,PodSandboxId:b1f7f88ba9fcf4d8c5436e7c2b210e62b6b270bd56b89dd703af6152ddf286a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718226263492477646,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467c4660de162c74d8bc29ebfdaebba7594ac023fa9d24a9cf66e9bbf967f960,PodSandboxId:b52346ae1ba5d421641ac822dcfa3dad8012e8185faab1a12b5318e8e6d999d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718226263472607810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3e772291419e269d7d384e11d755c67e1382c12181adfa9479ea1f2d722dee,PodSandboxId:27760f2e721b4cabd837e0e00013bfb9abfc74b3640aebacc2c0dc2c6f63291d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718225956372753821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c89de09a94cc863ff747da4ec19a23f20c354694f2ecfdff2e685ac2e65f3a,PodSandboxId:747bf00d4dc3c16fb5474ececcbda50427fd76c65921e57da775b0343ac22a12,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718225909231185107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5444a9801baa417feaec95ab2d88e718edc11b32229d9c81ed1fc47ca3eb5c13,PodSandboxId:28629b256cb1868a1ac54f06575a79eb7f183a2451add37a0f1a6b4c33e855cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718225909175177713,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.kubernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f8978fdf74512b23844eeef590cf9687d0dc616691561f425007b8c60de24c,PodSandboxId:b8a668a1284ee4597b6e3789502bc8ef03720dd341f92e3ea388cf996f7b0a4a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718225907816411305,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2388fa10173fb8f675b905600b8b657a7329203a4b98c3e612c5c01c94269906,PodSandboxId:b5f91e0ef8f81e93939cf8164692777100c10dfb5084c1d282f6a852c7a5d430,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718225904083066654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae9672be263494df9fd7a011d1621f35c8cafd2080af8bdc740e73f7fa580ce,PodSandboxId:e5bd299b8eaf1d06cd44d6ddacc1fef873a8b45636dd63f5e5b6848973158413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718225884347734931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8bdc02b5de3e8061a405cbb7daa6d053de15008582ea77c42820564bacb2aaf,PodSandboxId:6b5d45256b5732bcdd42f67c430b771dea6e25a6c3d5530705a5543d4904e0f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718225884354541310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc
5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.container.hash: 5a0dd458,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40967dcc017916934d08c71706f88dd7901b682671677d7cbf4b369fc15930c0,PodSandboxId:b09fab012871969680122263723e9e7810048137c6bd1be1640fe928263093cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718225884317379092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3280d415399d241dd67375b235ecd4588814568e5e825a7ffdba48158bea7c85,PodSandboxId:cb24e84e5e4faef0a1d547f72a678c388fb91c90f9b6b7e8fd8e07a31043ca75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718225884330858527,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cbaa108d-560f-4fcb-afa8-610ff105bc80 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.851081485Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c6a28ef-8426-4606-afb8-5dd517edd98b name=/runtime.v1.RuntimeService/Version
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.851186380Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c6a28ef-8426-4606-afb8-5dd517edd98b name=/runtime.v1.RuntimeService/Version
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.852721136Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a20731cc-0855-48e8-bd9f-21f8a2e24e17 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.853240040Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718226345853217041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a20731cc-0855-48e8-bd9f-21f8a2e24e17 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.853681225Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5269a850-45e8-4d2e-99ce-896c4872e1b2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.853751993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5269a850-45e8-4d2e-99ce-896c4872e1b2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.854187339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e356af2991acd35e8c5e1010c2edcfafcdaa44202f7a7de1f64fdcb129b1cb97,PodSandboxId:2765d8d89dc60b11465338bb625cf83233ae0c47977122526dda4e2c3eb8de0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718226302132060909,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e15df1e1c381db7fd134e2b814595d42af6ae8a54981cc908a49c53c4a1bb9,PodSandboxId:9f85e7d9a139355a4d11c93ef8d33423360aa0622ee97fb6e5a8846239efb0c1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718226268653878749,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00d5ce7c2be9f85077bc8e0388d9fa32ba1bda0561e11f78b247f01d99da3d6,PodSandboxId:d5431b2fdc6ccb5032e7e75dee7a0bcdc31ee038c99fa77cb164f49b50497852,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718226268505600075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48723a00f68034b2b9157bc84da729cb2ba5698b870150e02f80d3c7e1621aae,PodSandboxId:835ea78f2a30143650553e31633dddd64d8b30b2506bed7f27aea0cb8bf3a695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718226268445869395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1282d310fbf74c5662700466fab7cb94876f3856f4651e5f83284f2361bd8724,PodSandboxId:2377cfd1f1177a0030b2481ab2e4ad7abf93a225046e0458e3e1fbb8b2a3da91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718226268350438967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f189b9415f0984871a7f457c39dda70e32109051b8c0727a20cbd483bb4e9c8c,PodSandboxId:34f76e53eae69485c9673bb9813104abd4aeabf142b53b0ee79e0f471b99cc02,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718226263608310806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5a0dd458,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeba3ac7698f6380d6082ee5c673f572a710e176fb3a3d5dc6b43dfb7bb4130c,PodSandboxId:b9069b6210b629e3d6551e12622c1deadccfb1e5282b8305a936196343dc7e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718226263537240543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]string{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01295b32b6815016713b036abc654cee51e14f9aba50c15ab21f991e5ea1bac3,PodSandboxId:b1f7f88ba9fcf4d8c5436e7c2b210e62b6b270bd56b89dd703af6152ddf286a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718226263492477646,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467c4660de162c74d8bc29ebfdaebba7594ac023fa9d24a9cf66e9bbf967f960,PodSandboxId:b52346ae1ba5d421641ac822dcfa3dad8012e8185faab1a12b5318e8e6d999d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718226263472607810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3e772291419e269d7d384e11d755c67e1382c12181adfa9479ea1f2d722dee,PodSandboxId:27760f2e721b4cabd837e0e00013bfb9abfc74b3640aebacc2c0dc2c6f63291d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718225956372753821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c89de09a94cc863ff747da4ec19a23f20c354694f2ecfdff2e685ac2e65f3a,PodSandboxId:747bf00d4dc3c16fb5474ececcbda50427fd76c65921e57da775b0343ac22a12,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718225909231185107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5444a9801baa417feaec95ab2d88e718edc11b32229d9c81ed1fc47ca3eb5c13,PodSandboxId:28629b256cb1868a1ac54f06575a79eb7f183a2451add37a0f1a6b4c33e855cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718225909175177713,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.kubernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f8978fdf74512b23844eeef590cf9687d0dc616691561f425007b8c60de24c,PodSandboxId:b8a668a1284ee4597b6e3789502bc8ef03720dd341f92e3ea388cf996f7b0a4a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718225907816411305,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2388fa10173fb8f675b905600b8b657a7329203a4b98c3e612c5c01c94269906,PodSandboxId:b5f91e0ef8f81e93939cf8164692777100c10dfb5084c1d282f6a852c7a5d430,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718225904083066654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae9672be263494df9fd7a011d1621f35c8cafd2080af8bdc740e73f7fa580ce,PodSandboxId:e5bd299b8eaf1d06cd44d6ddacc1fef873a8b45636dd63f5e5b6848973158413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718225884347734931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8bdc02b5de3e8061a405cbb7daa6d053de15008582ea77c42820564bacb2aaf,PodSandboxId:6b5d45256b5732bcdd42f67c430b771dea6e25a6c3d5530705a5543d4904e0f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718225884354541310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc
5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.container.hash: 5a0dd458,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40967dcc017916934d08c71706f88dd7901b682671677d7cbf4b369fc15930c0,PodSandboxId:b09fab012871969680122263723e9e7810048137c6bd1be1640fe928263093cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718225884317379092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3280d415399d241dd67375b235ecd4588814568e5e825a7ffdba48158bea7c85,PodSandboxId:cb24e84e5e4faef0a1d547f72a678c388fb91c90f9b6b7e8fd8e07a31043ca75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718225884330858527,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5269a850-45e8-4d2e-99ce-896c4872e1b2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.897352109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f5acd39-16a0-4649-acfc-5492c7169577 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.897426764Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f5acd39-16a0-4649-acfc-5492c7169577 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.899008491Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e9d735d-a537-42af-8e2b-534931bf8c4f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.899539242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718226345899514996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e9d735d-a537-42af-8e2b-534931bf8c4f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.900051219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=396ad1a7-0ea0-47eb-bdcb-d7099a9b9b63 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.900227588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=396ad1a7-0ea0-47eb-bdcb-d7099a9b9b63 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:05:45 multinode-991051 crio[2858]: time="2024-06-12 21:05:45.900552441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e356af2991acd35e8c5e1010c2edcfafcdaa44202f7a7de1f64fdcb129b1cb97,PodSandboxId:2765d8d89dc60b11465338bb625cf83233ae0c47977122526dda4e2c3eb8de0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718226302132060909,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e15df1e1c381db7fd134e2b814595d42af6ae8a54981cc908a49c53c4a1bb9,PodSandboxId:9f85e7d9a139355a4d11c93ef8d33423360aa0622ee97fb6e5a8846239efb0c1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718226268653878749,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00d5ce7c2be9f85077bc8e0388d9fa32ba1bda0561e11f78b247f01d99da3d6,PodSandboxId:d5431b2fdc6ccb5032e7e75dee7a0bcdc31ee038c99fa77cb164f49b50497852,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718226268505600075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48723a00f68034b2b9157bc84da729cb2ba5698b870150e02f80d3c7e1621aae,PodSandboxId:835ea78f2a30143650553e31633dddd64d8b30b2506bed7f27aea0cb8bf3a695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718226268445869395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1282d310fbf74c5662700466fab7cb94876f3856f4651e5f83284f2361bd8724,PodSandboxId:2377cfd1f1177a0030b2481ab2e4ad7abf93a225046e0458e3e1fbb8b2a3da91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718226268350438967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f189b9415f0984871a7f457c39dda70e32109051b8c0727a20cbd483bb4e9c8c,PodSandboxId:34f76e53eae69485c9673bb9813104abd4aeabf142b53b0ee79e0f471b99cc02,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718226263608310806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5a0dd458,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeba3ac7698f6380d6082ee5c673f572a710e176fb3a3d5dc6b43dfb7bb4130c,PodSandboxId:b9069b6210b629e3d6551e12622c1deadccfb1e5282b8305a936196343dc7e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718226263537240543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]string{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01295b32b6815016713b036abc654cee51e14f9aba50c15ab21f991e5ea1bac3,PodSandboxId:b1f7f88ba9fcf4d8c5436e7c2b210e62b6b270bd56b89dd703af6152ddf286a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718226263492477646,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467c4660de162c74d8bc29ebfdaebba7594ac023fa9d24a9cf66e9bbf967f960,PodSandboxId:b52346ae1ba5d421641ac822dcfa3dad8012e8185faab1a12b5318e8e6d999d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718226263472607810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3e772291419e269d7d384e11d755c67e1382c12181adfa9479ea1f2d722dee,PodSandboxId:27760f2e721b4cabd837e0e00013bfb9abfc74b3640aebacc2c0dc2c6f63291d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718225956372753821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c89de09a94cc863ff747da4ec19a23f20c354694f2ecfdff2e685ac2e65f3a,PodSandboxId:747bf00d4dc3c16fb5474ececcbda50427fd76c65921e57da775b0343ac22a12,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718225909231185107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5444a9801baa417feaec95ab2d88e718edc11b32229d9c81ed1fc47ca3eb5c13,PodSandboxId:28629b256cb1868a1ac54f06575a79eb7f183a2451add37a0f1a6b4c33e855cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718225909175177713,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.kubernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f8978fdf74512b23844eeef590cf9687d0dc616691561f425007b8c60de24c,PodSandboxId:b8a668a1284ee4597b6e3789502bc8ef03720dd341f92e3ea388cf996f7b0a4a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718225907816411305,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2388fa10173fb8f675b905600b8b657a7329203a4b98c3e612c5c01c94269906,PodSandboxId:b5f91e0ef8f81e93939cf8164692777100c10dfb5084c1d282f6a852c7a5d430,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718225904083066654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae9672be263494df9fd7a011d1621f35c8cafd2080af8bdc740e73f7fa580ce,PodSandboxId:e5bd299b8eaf1d06cd44d6ddacc1fef873a8b45636dd63f5e5b6848973158413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718225884347734931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8bdc02b5de3e8061a405cbb7daa6d053de15008582ea77c42820564bacb2aaf,PodSandboxId:6b5d45256b5732bcdd42f67c430b771dea6e25a6c3d5530705a5543d4904e0f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718225884354541310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc
5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.container.hash: 5a0dd458,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40967dcc017916934d08c71706f88dd7901b682671677d7cbf4b369fc15930c0,PodSandboxId:b09fab012871969680122263723e9e7810048137c6bd1be1640fe928263093cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718225884317379092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3280d415399d241dd67375b235ecd4588814568e5e825a7ffdba48158bea7c85,PodSandboxId:cb24e84e5e4faef0a1d547f72a678c388fb91c90f9b6b7e8fd8e07a31043ca75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718225884330858527,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=396ad1a7-0ea0-47eb-bdcb-d7099a9b9b63 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e356af2991acd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      43 seconds ago       Running             busybox                   1                   2765d8d89dc60       busybox-fc5497c4f-846cm
	46e15df1e1c38       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               1                   9f85e7d9a1393       kindnet-f72hp
	b00d5ce7c2be9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   d5431b2fdc6cc       coredns-7db6d8ff4d-bfxk2
	48723a00f6803       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      About a minute ago   Running             kube-proxy                1                   835ea78f2a301       kube-proxy-nqg55
	1282d310fbf74       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   2377cfd1f1177       storage-provisioner
	f189b9415f098       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Running             kube-apiserver            1                   34f76e53eae69       kube-apiserver-multinode-991051
	eeba3ac7698f6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   b9069b6210b62       etcd-multinode-991051
	01295b32b6815       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   1                   b1f7f88ba9fcf       kube-controller-manager-multinode-991051
	467c4660de162       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      About a minute ago   Running             kube-scheduler            1                   b52346ae1ba5d       kube-scheduler-multinode-991051
	7a3e772291419       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   27760f2e721b4       busybox-fc5497c4f-846cm
	55c89de09a94c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   747bf00d4dc3c       coredns-7db6d8ff4d-bfxk2
	5444a9801baa4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   28629b256cb18       storage-provisioner
	98f8978fdf745       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    7 minutes ago        Exited              kindnet-cni               0                   b8a668a1284ee       kindnet-f72hp
	2388fa10173fb       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      7 minutes ago        Exited              kube-proxy                0                   b5f91e0ef8f81       kube-proxy-nqg55
	e8bdc02b5de3e       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago        Exited              kube-apiserver            0                   6b5d45256b573       kube-apiserver-multinode-991051
	3ae9672be2634       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago        Exited              kube-scheduler            0                   e5bd299b8eaf1       kube-scheduler-multinode-991051
	3280d415399d2       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago        Exited              kube-controller-manager   0                   cb24e84e5e4fa       kube-controller-manager-multinode-991051
	40967dcc01791       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   b09fab0128719       etcd-multinode-991051
	
	
	==> coredns [55c89de09a94cc863ff747da4ec19a23f20c354694f2ecfdff2e685ac2e65f3a] <==
	[INFO] 10.244.1.2:43745 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001847823s
	[INFO] 10.244.1.2:54879 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017707s
	[INFO] 10.244.1.2:33959 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105818s
	[INFO] 10.244.1.2:48862 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001244099s
	[INFO] 10.244.1.2:40661 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118494s
	[INFO] 10.244.1.2:58412 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077076s
	[INFO] 10.244.1.2:56989 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126649s
	[INFO] 10.244.0.3:43521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205979s
	[INFO] 10.244.0.3:54272 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070708s
	[INFO] 10.244.0.3:36006 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007132s
	[INFO] 10.244.0.3:57978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008134s
	[INFO] 10.244.1.2:50155 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131674s
	[INFO] 10.244.1.2:48107 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010002s
	[INFO] 10.244.1.2:33900 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007647s
	[INFO] 10.244.1.2:50036 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083568s
	[INFO] 10.244.0.3:56545 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154865s
	[INFO] 10.244.0.3:45508 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082938s
	[INFO] 10.244.0.3:50626 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112798s
	[INFO] 10.244.0.3:60306 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097559s
	[INFO] 10.244.1.2:38281 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017063s
	[INFO] 10.244.1.2:41878 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000133372s
	[INFO] 10.244.1.2:48515 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138159s
	[INFO] 10.244.1.2:54207 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121398s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b00d5ce7c2be9f85077bc8e0388d9fa32ba1bda0561e11f78b247f01d99da3d6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35213 - 51609 "HINFO IN 6562696624659742763.4870241254649022123. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016738574s
	
	
	==> describe nodes <==
	Name:               multinode-991051
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-991051
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=multinode-991051
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T20_58_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:58:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-991051
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:05:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:04:27 +0000   Wed, 12 Jun 2024 20:58:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:04:27 +0000   Wed, 12 Jun 2024 20:58:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:04:27 +0000   Wed, 12 Jun 2024 20:58:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:04:27 +0000   Wed, 12 Jun 2024 20:58:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    multinode-991051
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0768626dc5484c468fac8e9844f6eea4
	  System UUID:                0768626d-c548-4c46-8fac-8e9844f6eea4
	  Boot ID:                    1c4632eb-6f97-4dc1-98a0-c709cb774373
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-846cm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 coredns-7db6d8ff4d-bfxk2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m23s
	  kube-system                 etcd-multinode-991051                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m37s
	  kube-system                 kindnet-f72hp                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m23s
	  kube-system                 kube-apiserver-multinode-991051             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-controller-manager-multinode-991051    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-proxy-nqg55                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-scheduler-multinode-991051             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m21s              kube-proxy       
	  Normal  Starting                 77s                kube-proxy       
	  Normal  NodeHasSufficientPID     7m37s              kubelet          Node multinode-991051 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m37s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m37s              kubelet          Node multinode-991051 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m37s              kubelet          Node multinode-991051 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m37s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m24s              node-controller  Node multinode-991051 event: Registered Node multinode-991051 in Controller
	  Normal  NodeReady                7m18s              kubelet          Node multinode-991051 status is now: NodeReady
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  84s (x8 over 84s)  kubelet          Node multinode-991051 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x8 over 84s)  kubelet          Node multinode-991051 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x7 over 84s)  kubelet          Node multinode-991051 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           67s                node-controller  Node multinode-991051 event: Registered Node multinode-991051 in Controller
	
	
	Name:               multinode-991051-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-991051-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=multinode-991051
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T21_05_06_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:05:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-991051-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:05:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:05:36 +0000   Wed, 12 Jun 2024 21:05:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:05:36 +0000   Wed, 12 Jun 2024 21:05:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:05:36 +0000   Wed, 12 Jun 2024 21:05:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:05:36 +0000   Wed, 12 Jun 2024 21:05:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.56
	  Hostname:    multinode-991051-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ffbd7e2434342f89de57b022368ba2d
	  System UUID:                1ffbd7e2-4343-42f8-9de5-7b022368ba2d
	  Boot ID:                    f15c2fb9-8fa1-42c5-8627-1b04bd417ff0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-96qct    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kindnet-nhj4r              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m47s
	  kube-system                 kube-proxy-snl29           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m41s                  kube-proxy  
	  Normal  Starting                 36s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m47s (x2 over 6m47s)  kubelet     Node multinode-991051-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m47s (x2 over 6m47s)  kubelet     Node multinode-991051-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m47s (x2 over 6m47s)  kubelet     Node multinode-991051-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m47s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m37s                  kubelet     Node multinode-991051-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  40s (x2 over 40s)      kubelet     Node multinode-991051-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x2 over 40s)      kubelet     Node multinode-991051-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x2 over 40s)      kubelet     Node multinode-991051-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  40s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                32s                    kubelet     Node multinode-991051-m02 status is now: NodeReady
	
	
	Name:               multinode-991051-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-991051-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=multinode-991051
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T21_05_34_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:05:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-991051-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:05:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:05:42 +0000   Wed, 12 Jun 2024 21:05:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:05:42 +0000   Wed, 12 Jun 2024 21:05:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:05:42 +0000   Wed, 12 Jun 2024 21:05:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:05:42 +0000   Wed, 12 Jun 2024 21:05:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    multinode-991051-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9455dc38301649cebea8214256f76863
	  System UUID:                9455dc38-3016-49ce-bea8-214256f76863
	  Boot ID:                    e15cdfea-40a1-48f4-8da7-34ee86adbf1d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6ds8j       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m58s
	  kube-system                 kube-proxy-lf7jn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m54s                  kube-proxy  
	  Normal  Starting                 7s                     kube-proxy  
	  Normal  Starting                 5m14s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  5m59s (x2 over 5m59s)  kubelet     Node multinode-991051-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m59s (x2 over 5m59s)  kubelet     Node multinode-991051-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m59s (x2 over 5m59s)  kubelet     Node multinode-991051-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m59s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m49s                  kubelet     Node multinode-991051-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m20s (x2 over 5m20s)  kubelet     Node multinode-991051-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m20s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m20s (x2 over 5m20s)  kubelet     Node multinode-991051-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m20s (x2 over 5m20s)  kubelet     Node multinode-991051-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m10s                  kubelet     Node multinode-991051-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)      kubelet     Node multinode-991051-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)      kubelet     Node multinode-991051-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)      kubelet     Node multinode-991051-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-991051-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.063790] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.170111] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.146708] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.256620] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.202304] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[Jun12 20:58] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.059176] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.002826] systemd-fstab-generator[1272]: Ignoring "noauto" option for root device
	[  +0.085143] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.362582] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.752240] systemd-fstab-generator[1469]: Ignoring "noauto" option for root device
	[  +5.162304] kauditd_printk_skb: 57 callbacks suppressed
	[Jun12 20:59] kauditd_printk_skb: 15 callbacks suppressed
	[Jun12 21:04] systemd-fstab-generator[2776]: Ignoring "noauto" option for root device
	[  +0.157188] systemd-fstab-generator[2789]: Ignoring "noauto" option for root device
	[  +0.175058] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.144685] systemd-fstab-generator[2816]: Ignoring "noauto" option for root device
	[  +0.299987] systemd-fstab-generator[2844]: Ignoring "noauto" option for root device
	[  +5.977888] systemd-fstab-generator[2942]: Ignoring "noauto" option for root device
	[  +0.087497] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.007294] systemd-fstab-generator[3067]: Ignoring "noauto" option for root device
	[  +5.659838] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.356875] kauditd_printk_skb: 32 callbacks suppressed
	[  +1.786908] systemd-fstab-generator[3885]: Ignoring "noauto" option for root device
	[Jun12 21:05] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [40967dcc017916934d08c71706f88dd7901b682671677d7cbf4b369fc15930c0] <==
	{"level":"info","ts":"2024-06-12T20:58:05.48004Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-12T20:58:05.541828Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.222:2379"}
	{"level":"info","ts":"2024-06-12T20:58:59.629967Z","caller":"traceutil/trace.go:171","msg":"trace[1762105491] linearizableReadLoop","detail":"{readStateIndex:464; appliedIndex:463; }","duration":"191.675434ms","start":"2024-06-12T20:58:59.438265Z","end":"2024-06-12T20:58:59.62994Z","steps":["trace[1762105491] 'read index received'  (duration: 128.54146ms)","trace[1762105491] 'applied index is now lower than readState.Index'  (duration: 63.133091ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-12T20:58:59.630163Z","caller":"traceutil/trace.go:171","msg":"trace[1100956355] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"227.451386ms","start":"2024-06-12T20:58:59.402704Z","end":"2024-06-12T20:58:59.630155Z","steps":["trace[1100956355] 'process raft request'  (duration: 164.094602ms)","trace[1100956355] 'compare'  (duration: 63.033578ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T20:58:59.630479Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.099593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-12T20:58:59.63061Z","caller":"traceutil/trace.go:171","msg":"trace[1449844035] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:443; }","duration":"192.415127ms","start":"2024-06-12T20:58:59.438179Z","end":"2024-06-12T20:58:59.630594Z","steps":["trace[1449844035] 'agreement among raft nodes before linearized reading'  (duration: 192.106437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:58:59.630659Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.842613ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-06-12T20:58:59.630725Z","caller":"traceutil/trace.go:171","msg":"trace[1893418406] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:444; }","duration":"124.979757ms","start":"2024-06-12T20:58:59.505737Z","end":"2024-06-12T20:58:59.630717Z","steps":["trace[1893418406] 'agreement among raft nodes before linearized reading'  (duration: 124.84517ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:58:59.630846Z","caller":"traceutil/trace.go:171","msg":"trace[452590583] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"190.640687ms","start":"2024-06-12T20:58:59.4402Z","end":"2024-06-12T20:58:59.63084Z","steps":["trace[452590583] 'process raft request'  (duration: 190.335178ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:59:04.110076Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.130319ms","expected-duration":"100ms","prefix":"","request":"header:<ID:694275683418241703 > lease_revoke:<id:09a2900e3e55fa25>","response":"size:28"}
	{"level":"info","ts":"2024-06-12T20:59:04.11021Z","caller":"traceutil/trace.go:171","msg":"trace[778719727] linearizableReadLoop","detail":"{readStateIndex:501; appliedIndex:500; }","duration":"285.186144ms","start":"2024-06-12T20:59:03.825011Z","end":"2024-06-12T20:59:04.110197Z","steps":["trace[778719727] 'read index received'  (duration: 31.875µs)","trace[778719727] 'applied index is now lower than readState.Index'  (duration: 285.153032ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T20:59:04.110279Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"285.285139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-991051-m02\" ","response":"range_response_count:1 size:3022"}
	{"level":"info","ts":"2024-06-12T20:59:04.110312Z","caller":"traceutil/trace.go:171","msg":"trace[207300915] range","detail":"{range_begin:/registry/minions/multinode-991051-m02; range_end:; response_count:1; response_revision:476; }","duration":"285.353831ms","start":"2024-06-12T20:59:03.824952Z","end":"2024-06-12T20:59:04.110306Z","steps":["trace[207300915] 'agreement among raft nodes before linearized reading'  (duration: 285.276051ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:59:47.984442Z","caller":"traceutil/trace.go:171","msg":"trace[21608255] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"241.589179ms","start":"2024-06-12T20:59:47.742809Z","end":"2024-06-12T20:59:47.984399Z","steps":["trace[21608255] 'process raft request'  (duration: 234.076912ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:59:47.984763Z","caller":"traceutil/trace.go:171","msg":"trace[880145146] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"198.263278ms","start":"2024-06-12T20:59:47.786483Z","end":"2024-06-12T20:59:47.984746Z","steps":["trace[880145146] 'process raft request'  (duration: 197.800018ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T21:02:42.361495Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-12T21:02:42.361682Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-991051","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.222:2380"],"advertise-client-urls":["https://192.168.39.222:2379"]}
	{"level":"warn","ts":"2024-06-12T21:02:42.361829Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-12T21:02:42.361926Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-12T21:02:42.421297Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.222:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-12T21:02:42.421336Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.222:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-12T21:02:42.422752Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d8a7e113a49009a2","current-leader-member-id":"d8a7e113a49009a2"}
	{"level":"info","ts":"2024-06-12T21:02:42.426828Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2024-06-12T21:02:42.426929Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2024-06-12T21:02:42.426941Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-991051","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.222:2380"],"advertise-client-urls":["https://192.168.39.222:2379"]}
	
	
	==> etcd [eeba3ac7698f6380d6082ee5c673f572a710e176fb3a3d5dc6b43dfb7bb4130c] <==
	{"level":"info","ts":"2024-06-12T21:04:23.965374Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-12T21:04:23.965391Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-12T21:04:23.965661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 switched to configuration voters=(15611694107784645026)"}
	{"level":"info","ts":"2024-06-12T21:04:23.965713Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"26257d506d5fabfb","local-member-id":"d8a7e113a49009a2","added-peer-id":"d8a7e113a49009a2","added-peer-peer-urls":["https://192.168.39.222:2380"]}
	{"level":"info","ts":"2024-06-12T21:04:23.965848Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"26257d506d5fabfb","local-member-id":"d8a7e113a49009a2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:04:23.965868Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:04:23.97409Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-12T21:04:23.974433Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d8a7e113a49009a2","initial-advertise-peer-urls":["https://192.168.39.222:2380"],"listen-peer-urls":["https://192.168.39.222:2380"],"advertise-client-urls":["https://192.168.39.222:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.222:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-12T21:04:23.974467Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-12T21:04:23.974573Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2024-06-12T21:04:23.974579Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2024-06-12T21:04:25.505866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-12T21:04:25.505927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-12T21:04:25.505963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 received MsgPreVoteResp from d8a7e113a49009a2 at term 2"}
	{"level":"info","ts":"2024-06-12T21:04:25.505975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became candidate at term 3"}
	{"level":"info","ts":"2024-06-12T21:04:25.505981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 received MsgVoteResp from d8a7e113a49009a2 at term 3"}
	{"level":"info","ts":"2024-06-12T21:04:25.505988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became leader at term 3"}
	{"level":"info","ts":"2024-06-12T21:04:25.506015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d8a7e113a49009a2 elected leader d8a7e113a49009a2 at term 3"}
	{"level":"info","ts":"2024-06-12T21:04:25.513411Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d8a7e113a49009a2","local-member-attributes":"{Name:multinode-991051 ClientURLs:[https://192.168.39.222:2379]}","request-path":"/0/members/d8a7e113a49009a2/attributes","cluster-id":"26257d506d5fabfb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-12T21:04:25.513558Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:04:25.515544Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-12T21:04:25.517193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:04:25.517376Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-12T21:04:25.517404Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-12T21:04:25.518774Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.222:2379"}
	
	
	==> kernel <==
	 21:05:46 up 8 min,  0 users,  load average: 0.69, 0.37, 0.17
	Linux multinode-991051 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [46e15df1e1c381db7fd134e2b814595d42af6ae8a54981cc908a49c53c4a1bb9] <==
	I0612 21:04:59.513086       1 main.go:250] Node multinode-991051-m03 has CIDR [10.244.3.0/24] 
	I0612 21:05:09.527244       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:05:09.527345       1 main.go:227] handling current node
	I0612 21:05:09.527372       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:05:09.527400       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:05:09.527535       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0612 21:05:09.527556       1 main.go:250] Node multinode-991051-m03 has CIDR [10.244.3.0/24] 
	I0612 21:05:19.539432       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:05:19.539624       1 main.go:227] handling current node
	I0612 21:05:19.539734       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:05:19.539759       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:05:19.539904       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0612 21:05:19.539925       1 main.go:250] Node multinode-991051-m03 has CIDR [10.244.3.0/24] 
	I0612 21:05:29.575595       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:05:29.575766       1 main.go:227] handling current node
	I0612 21:05:29.575802       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:05:29.575824       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:05:29.575984       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0612 21:05:29.576022       1 main.go:250] Node multinode-991051-m03 has CIDR [10.244.3.0/24] 
	I0612 21:05:39.584406       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:05:39.584472       1 main.go:227] handling current node
	I0612 21:05:39.584492       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:05:39.584497       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:05:39.584677       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0612 21:05:39.584700       1 main.go:250] Node multinode-991051-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [98f8978fdf74512b23844eeef590cf9687d0dc616691561f425007b8c60de24c] <==
	I0612 21:01:58.698593       1 main.go:250] Node multinode-991051-m03 has CIDR [10.244.3.0/24] 
	I0612 21:02:08.704204       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:02:08.704248       1 main.go:227] handling current node
	I0612 21:02:08.704259       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:02:08.704264       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:02:08.704395       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0612 21:02:08.704417       1 main.go:250] Node multinode-991051-m03 has CIDR [10.244.3.0/24] 
	I0612 21:02:18.709074       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:02:18.709237       1 main.go:227] handling current node
	I0612 21:02:18.709279       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:02:18.709300       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:02:18.709454       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0612 21:02:18.709475       1 main.go:250] Node multinode-991051-m03 has CIDR [10.244.3.0/24] 
	I0612 21:02:28.777315       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:02:28.777415       1 main.go:227] handling current node
	I0612 21:02:28.777440       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:02:28.777457       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:02:28.777592       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0612 21:02:28.777613       1 main.go:250] Node multinode-991051-m03 has CIDR [10.244.3.0/24] 
	I0612 21:02:38.787258       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:02:38.787511       1 main.go:227] handling current node
	I0612 21:02:38.787556       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:02:38.787581       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:02:38.787752       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0612 21:02:38.787790       1 main.go:250] Node multinode-991051-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e8bdc02b5de3e8061a405cbb7daa6d053de15008582ea77c42820564bacb2aaf] <==
	W0612 21:02:42.380978       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381008       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381048       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381231       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381270       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381362       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381416       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381509       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381574       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381625       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381658       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.382979       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383038       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383073       1 logging.go:59] [core] [Channel #9 SubChannel #10] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383169       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383353       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383400       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383437       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383468       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383613       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383649       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383682       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383712       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383751       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383787       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f189b9415f0984871a7f457c39dda70e32109051b8c0727a20cbd483bb4e9c8c] <==
	I0612 21:04:26.881743       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0612 21:04:26.881787       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0612 21:04:26.882967       1 shared_informer.go:320] Caches are synced for configmaps
	I0612 21:04:26.883669       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0612 21:04:26.885561       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0612 21:04:26.885610       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0612 21:04:26.893035       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0612 21:04:26.903968       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0612 21:04:26.904314       1 aggregator.go:165] initial CRD sync complete...
	I0612 21:04:26.904358       1 autoregister_controller.go:141] Starting autoregister controller
	I0612 21:04:26.904382       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0612 21:04:26.904405       1 cache.go:39] Caches are synced for autoregister controller
	E0612 21:04:26.912969       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0612 21:04:26.929980       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0612 21:04:26.938077       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 21:04:26.938142       1 policy_source.go:224] refreshing policies
	I0612 21:04:26.996918       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0612 21:04:27.786730       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0612 21:04:29.223525       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 21:04:29.353922       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 21:04:29.365063       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 21:04:29.425690       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0612 21:04:29.432599       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0612 21:04:39.617783       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0612 21:04:39.667613       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [01295b32b6815016713b036abc654cee51e14f9aba50c15ab21f991e5ea1bac3] <==
	I0612 21:04:40.009626       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 21:04:40.037864       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 21:04:40.037972       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0612 21:05:01.575621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.426904ms"
	I0612 21:05:01.584061       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.292269ms"
	I0612 21:05:01.584426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="158.663µs"
	I0612 21:05:06.169890       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-991051-m02\" does not exist"
	I0612 21:05:06.180445       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-991051-m02" podCIDRs=["10.244.1.0/24"]
	I0612 21:05:07.041451       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.085µs"
	I0612 21:05:07.093579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.963µs"
	I0612 21:05:07.108606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.872µs"
	I0612 21:05:07.112174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.924µs"
	I0612 21:05:07.123003       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.035µs"
	I0612 21:05:07.131727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.382µs"
	I0612 21:05:10.699809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.489µs"
	I0612 21:05:14.582943       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:05:14.597862       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.188µs"
	I0612 21:05:14.609513       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.38µs"
	I0612 21:05:18.564487       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.768761ms"
	I0612 21:05:18.565917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.599µs"
	I0612 21:05:32.655363       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:05:33.918770       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:05:33.919364       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-991051-m03\" does not exist"
	I0612 21:05:33.930170       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-991051-m03" podCIDRs=["10.244.2.0/24"]
	I0612 21:05:42.875190       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	
	
	==> kube-controller-manager [3280d415399d241dd67375b235ecd4588814568e5e825a7ffdba48158bea7c85] <==
	I0612 20:58:59.634944       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-991051-m02\" does not exist"
	I0612 20:58:59.665590       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-991051-m02" podCIDRs=["10.244.1.0/24"]
	I0612 20:59:02.498018       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-991051-m02"
	I0612 20:59:09.727052       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 20:59:11.896081       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.704642ms"
	I0612 20:59:11.917585       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.377762ms"
	I0612 20:59:11.917662       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.664µs"
	I0612 20:59:11.917948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.385µs"
	I0612 20:59:15.435836       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.512386ms"
	I0612 20:59:15.436081       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.819µs"
	I0612 20:59:16.811412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.369381ms"
	I0612 20:59:16.811492       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.522µs"
	I0612 20:59:47.987906       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 20:59:47.988022       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-991051-m03\" does not exist"
	I0612 20:59:48.017510       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-991051-m03" podCIDRs=["10.244.2.0/24"]
	I0612 20:59:52.514623       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-991051-m03"
	I0612 20:59:57.337307       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:00:25.808670       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:00:26.928549       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:00:26.928598       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-991051-m03\" does not exist"
	I0612 21:00:26.947307       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-991051-m03" podCIDRs=["10.244.3.0/24"]
	I0612 21:00:36.183578       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:01:12.567682       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m03"
	I0612 21:01:12.619601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.178179ms"
	I0612 21:01:12.619823       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.857µs"
	
	
	==> kube-proxy [2388fa10173fb8f675b905600b8b657a7329203a4b98c3e612c5c01c94269906] <==
	I0612 20:58:24.422475       1 server_linux.go:69] "Using iptables proxy"
	I0612 20:58:24.436576       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.222"]
	I0612 20:58:24.526223       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 20:58:24.526288       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 20:58:24.526305       1 server_linux.go:165] "Using iptables Proxier"
	I0612 20:58:24.529978       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 20:58:24.530233       1 server.go:872] "Version info" version="v1.30.1"
	I0612 20:58:24.530265       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:58:24.531940       1 config.go:192] "Starting service config controller"
	I0612 20:58:24.531972       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 20:58:24.532000       1 config.go:101] "Starting endpoint slice config controller"
	I0612 20:58:24.532004       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 20:58:24.534602       1 config.go:319] "Starting node config controller"
	I0612 20:58:24.534635       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 20:58:24.632324       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 20:58:24.632402       1 shared_informer.go:320] Caches are synced for service config
	I0612 20:58:24.635448       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [48723a00f68034b2b9157bc84da729cb2ba5698b870150e02f80d3c7e1621aae] <==
	I0612 21:04:28.741732       1 server_linux.go:69] "Using iptables proxy"
	I0612 21:04:28.757931       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.222"]
	I0612 21:04:28.849421       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 21:04:28.849471       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 21:04:28.849489       1 server_linux.go:165] "Using iptables Proxier"
	I0612 21:04:28.854521       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 21:04:28.854724       1 server.go:872] "Version info" version="v1.30.1"
	I0612 21:04:28.854737       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:04:28.864268       1 config.go:192] "Starting service config controller"
	I0612 21:04:28.864291       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 21:04:28.864350       1 config.go:101] "Starting endpoint slice config controller"
	I0612 21:04:28.864354       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 21:04:28.864846       1 config.go:319] "Starting node config controller"
	I0612 21:04:28.864854       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 21:04:28.968543       1 shared_informer.go:320] Caches are synced for node config
	I0612 21:04:28.968574       1 shared_informer.go:320] Caches are synced for service config
	I0612 21:04:28.968626       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3ae9672be263494df9fd7a011d1621f35c8cafd2080af8bdc740e73f7fa580ce] <==
	E0612 20:58:07.195337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0612 20:58:07.198352       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 20:58:07.198503       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 20:58:08.027001       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0612 20:58:08.027030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0612 20:58:08.033306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0612 20:58:08.033332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0612 20:58:08.057282       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0612 20:58:08.057354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0612 20:58:08.072288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0612 20:58:08.072389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0612 20:58:08.164741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0612 20:58:08.164865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0612 20:58:08.180593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0612 20:58:08.180682       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0612 20:58:08.223328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0612 20:58:08.223523       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0612 20:58:08.261784       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0612 20:58:08.261812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0612 20:58:08.374494       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 20:58:08.375052       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 20:58:08.408945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0612 20:58:08.409392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 20:58:10.182710       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0612 21:02:42.357613       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [467c4660de162c74d8bc29ebfdaebba7594ac023fa9d24a9cf66e9bbf967f960] <==
	I0612 21:04:24.759223       1 serving.go:380] Generated self-signed cert in-memory
	W0612 21:04:26.818651       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0612 21:04:26.818693       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 21:04:26.818703       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0612 21:04:26.818709       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 21:04:26.863641       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 21:04:26.863688       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:04:26.867454       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 21:04:26.867601       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 21:04:26.867635       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 21:04:26.867660       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 21:04:26.968220       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 21:04:23 multinode-991051 kubelet[3074]: E0612 21:04:23.774767    3074 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-991051&limit=500&resourceVersion=0": dial tcp 192.168.39.222:8443: connect: connection refused
	Jun 12 21:04:24 multinode-991051 kubelet[3074]: I0612 21:04:24.357213    3074 kubelet_node_status.go:73] "Attempting to register node" node="multinode-991051"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.012776    3074 kubelet_node_status.go:112] "Node was previously registered" node="multinode-991051"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.012881    3074 kubelet_node_status.go:76] "Successfully registered node" node="multinode-991051"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.014042    3074 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.015310    3074 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.829826    3074 apiserver.go:52] "Watching apiserver"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.833065    3074 topology_manager.go:215] "Topology Admit Handler" podUID="d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e" podNamespace="kube-system" podName="kindnet-f72hp"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.833259    3074 topology_manager.go:215] "Topology Admit Handler" podUID="5a2029f4-e926-41da-8fbc-b6cf94d25ad9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bfxk2"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.833332    3074 topology_manager.go:215] "Topology Admit Handler" podUID="2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d" podNamespace="kube-system" podName="kube-proxy-nqg55"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.834365    3074 topology_manager.go:215] "Topology Admit Handler" podUID="1da33189-d542-48a2-a11a-67720a303a16" podNamespace="kube-system" podName="storage-provisioner"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.834506    3074 topology_manager.go:215] "Topology Admit Handler" podUID="8f3f0e5b-62aa-4a06-8b50-45de75f7c9df" podNamespace="default" podName="busybox-fc5497c4f-846cm"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.844441    3074 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.907719    3074 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1da33189-d542-48a2-a11a-67720a303a16-tmp\") pod \"storage-provisioner\" (UID: \"1da33189-d542-48a2-a11a-67720a303a16\") " pod="kube-system/storage-provisioner"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.907993    3074 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e-xtables-lock\") pod \"kindnet-f72hp\" (UID: \"d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e\") " pod="kube-system/kindnet-f72hp"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.908034    3074 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d-xtables-lock\") pod \"kube-proxy-nqg55\" (UID: \"2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d\") " pod="kube-system/kube-proxy-nqg55"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.908172    3074 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d-lib-modules\") pod \"kube-proxy-nqg55\" (UID: \"2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d\") " pod="kube-system/kube-proxy-nqg55"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.908301    3074 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e-cni-cfg\") pod \"kindnet-f72hp\" (UID: \"d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e\") " pod="kube-system/kindnet-f72hp"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.908398    3074 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e-lib-modules\") pod \"kindnet-f72hp\" (UID: \"d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e\") " pod="kube-system/kindnet-f72hp"
	Jun 12 21:04:35 multinode-991051 kubelet[3074]: I0612 21:04:35.603526    3074 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 12 21:05:22 multinode-991051 kubelet[3074]: E0612 21:05:22.903681    3074 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:05:22 multinode-991051 kubelet[3074]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:05:22 multinode-991051 kubelet[3074]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:05:22 multinode-991051 kubelet[3074]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:05:22 multinode-991051 kubelet[3074]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:05:45.476984   52013 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17779-14199/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-991051 -n multinode-991051
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-991051 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (308.54s)
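Note: the stderr captured above shows minikube's logs.go failing to read lastStart.txt with "bufio.Scanner: token too long" — a single log line exceeded bufio.Scanner's default 64 KiB token limit. A minimal Go sketch of reading such a file with an enlarged buffer; the file path and 1 MiB limit are illustrative only, not minikube's actual logs.go:

	// Sketch: scan a log file whose lines may exceed the default token limit.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default max token size is bufio.MaxScanTokenSize (64 KiB);
		// allow lines up to 1 MiB instead.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			// Without the larger buffer this reports "token too long".
			fmt.Fprintln(os.Stderr, err)
		}
	}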

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 stop
E0612 21:06:48.613374   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-991051 stop: exit status 82 (2m0.4608366s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-991051-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-991051 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-991051 status: exit status 3 (18.730899231s)

                                                
                                                
-- stdout --
	multinode-991051
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-991051-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:08:08.751516   52695 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	E0612 21:08:08.751549   52695 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-991051 status" : exit status 3
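For context, a minimal sketch (not the integration test's actual helper) of how a caller can recover the non-zero exit codes seen above — 82 from "minikube stop" and 3 from "minikube status" — using os/exec; the binary path and arguments simply mirror the log:

	// Sketch: run a command and surface its exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) (int, error) {
		cmd := exec.Command(name, args...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) {
				// Non-zero exit: report the code and the combined output.
				return ee.ExitCode(), fmt.Errorf("%s: %w\n%s", name, err, out)
			}
			return -1, err // could not start, no exit code
		}
		return 0, nil
	}

	func main() {
		// Hypothetical invocation mirroring the failing step above.
		code, err := run("out/minikube-linux-amd64", "-p", "multinode-991051", "stop")
		if err != nil {
			fmt.Printf("exit status %d: %v\n", code, err)
		}
	}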
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-991051 -n multinode-991051
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-991051 logs -n 25: (1.464805178s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-991051 ssh -n                                                                 | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-991051 cp multinode-991051-m02:/home/docker/cp-test.txt                       | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051:/home/docker/cp-test_multinode-991051-m02_multinode-991051.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n                                                                 | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n multinode-991051 sudo cat                                       | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | /home/docker/cp-test_multinode-991051-m02_multinode-991051.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-991051 cp multinode-991051-m02:/home/docker/cp-test.txt                       | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m03:/home/docker/cp-test_multinode-991051-m02_multinode-991051-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n                                                                 | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n multinode-991051-m03 sudo cat                                   | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | /home/docker/cp-test_multinode-991051-m02_multinode-991051-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-991051 cp testdata/cp-test.txt                                                | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n                                                                 | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-991051 cp multinode-991051-m03:/home/docker/cp-test.txt                       | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile839762677/001/cp-test_multinode-991051-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n                                                                 | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-991051 cp multinode-991051-m03:/home/docker/cp-test.txt                       | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051:/home/docker/cp-test_multinode-991051-m03_multinode-991051.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n                                                                 | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n multinode-991051 sudo cat                                       | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | /home/docker/cp-test_multinode-991051-m03_multinode-991051.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-991051 cp multinode-991051-m03:/home/docker/cp-test.txt                       | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m02:/home/docker/cp-test_multinode-991051-m03_multinode-991051-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n                                                                 | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | multinode-991051-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-991051 ssh -n multinode-991051-m02 sudo cat                                   | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | /home/docker/cp-test_multinode-991051-m03_multinode-991051-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-991051 node stop m03                                                          | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	| node    | multinode-991051 node start                                                             | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC | 12 Jun 24 21:00 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-991051                                                                | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC |                     |
	| stop    | -p multinode-991051                                                                     | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:00 UTC |                     |
	| start   | -p multinode-991051                                                                     | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:02 UTC | 12 Jun 24 21:05 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-991051                                                                | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:05 UTC |                     |
	| node    | multinode-991051 node delete                                                            | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:05 UTC | 12 Jun 24 21:05 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-991051 stop                                                                   | multinode-991051 | jenkins | v1.33.1 | 12 Jun 24 21:05 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 21:02:41
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 21:02:41.560828   50965 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:02:41.561090   50965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:02:41.561100   50965 out.go:304] Setting ErrFile to fd 2...
	I0612 21:02:41.561105   50965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:02:41.561358   50965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:02:41.561943   50965 out.go:298] Setting JSON to false
	I0612 21:02:41.563122   50965 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6307,"bootTime":1718219855,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:02:41.563354   50965 start.go:139] virtualization: kvm guest
	I0612 21:02:41.565940   50965 out.go:177] * [multinode-991051] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:02:41.567775   50965 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:02:41.567718   50965 notify.go:220] Checking for updates...
	I0612 21:02:41.569355   50965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:02:41.570837   50965 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:02:41.572337   50965 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:02:41.573682   50965 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:02:41.574842   50965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:02:41.576689   50965 config.go:182] Loaded profile config "multinode-991051": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:02:41.576804   50965 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:02:41.577260   50965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:02:41.577312   50965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:02:41.592924   50965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
	I0612 21:02:41.593305   50965 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:02:41.593931   50965 main.go:141] libmachine: Using API Version  1
	I0612 21:02:41.593958   50965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:02:41.594327   50965 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:02:41.594560   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:02:41.630095   50965 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 21:02:41.631534   50965 start.go:297] selected driver: kvm2
	I0612 21:02:41.631552   50965 start.go:901] validating driver "kvm2" against &{Name:multinode-991051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.1 ClusterName:multinode-991051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:02:41.631707   50965 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:02:41.632050   50965 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:02:41.632147   50965 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:02:41.647752   50965 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:02:41.648486   50965 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:02:41.648552   50965 cni.go:84] Creating CNI manager for ""
	I0612 21:02:41.648563   50965 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0612 21:02:41.648615   50965 start.go:340] cluster config:
	{Name:multinode-991051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-991051 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:02:41.648738   50965 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:02:41.650733   50965 out.go:177] * Starting "multinode-991051" primary control-plane node in "multinode-991051" cluster
	I0612 21:02:41.651996   50965 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:02:41.652033   50965 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0612 21:02:41.652040   50965 cache.go:56] Caching tarball of preloaded images
	I0612 21:02:41.652161   50965 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:02:41.652176   50965 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 21:02:41.652296   50965 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/config.json ...
	I0612 21:02:41.652498   50965 start.go:360] acquireMachinesLock for multinode-991051: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:02:41.652543   50965 start.go:364] duration metric: took 25.837µs to acquireMachinesLock for "multinode-991051"
	I0612 21:02:41.652560   50965 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:02:41.652579   50965 fix.go:54] fixHost starting: 
	I0612 21:02:41.652817   50965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:02:41.652854   50965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:02:41.667685   50965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45277
	I0612 21:02:41.668112   50965 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:02:41.668546   50965 main.go:141] libmachine: Using API Version  1
	I0612 21:02:41.668564   50965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:02:41.668898   50965 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:02:41.669102   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:02:41.669268   50965 main.go:141] libmachine: (multinode-991051) Calling .GetState
	I0612 21:02:41.670761   50965 fix.go:112] recreateIfNeeded on multinode-991051: state=Running err=<nil>
	W0612 21:02:41.670787   50965 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:02:41.672788   50965 out.go:177] * Updating the running kvm2 "multinode-991051" VM ...
	I0612 21:02:41.674303   50965 machine.go:94] provisionDockerMachine start ...
	I0612 21:02:41.674325   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:02:41.674532   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:02:41.676878   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:41.677410   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:02:41.677440   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:41.677613   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:02:41.677783   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:41.677941   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:41.678076   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:02:41.678240   50965 main.go:141] libmachine: Using SSH client type: native
	I0612 21:02:41.678417   50965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0612 21:02:41.678427   50965 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:02:41.784724   50965 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-991051
	
	I0612 21:02:41.784748   50965 main.go:141] libmachine: (multinode-991051) Calling .GetMachineName
	I0612 21:02:41.784993   50965 buildroot.go:166] provisioning hostname "multinode-991051"
	I0612 21:02:41.785022   50965 main.go:141] libmachine: (multinode-991051) Calling .GetMachineName
	I0612 21:02:41.785188   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:02:41.788277   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:41.788769   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:02:41.788802   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:41.788901   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:02:41.789082   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:41.789242   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:41.789384   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:02:41.789538   50965 main.go:141] libmachine: Using SSH client type: native
	I0612 21:02:41.789716   50965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0612 21:02:41.789740   50965 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-991051 && echo "multinode-991051" | sudo tee /etc/hostname
	I0612 21:02:41.908580   50965 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-991051
	
	I0612 21:02:41.908607   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:02:41.911098   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:41.911495   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:02:41.911523   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:41.911687   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:02:41.911888   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:41.912014   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:41.912164   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:02:41.912302   50965 main.go:141] libmachine: Using SSH client type: native
	I0612 21:02:41.912485   50965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0612 21:02:41.912507   50965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-991051' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-991051/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-991051' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:02:42.016566   50965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:02:42.016620   50965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:02:42.016659   50965 buildroot.go:174] setting up certificates
	I0612 21:02:42.016672   50965 provision.go:84] configureAuth start
	I0612 21:02:42.016689   50965 main.go:141] libmachine: (multinode-991051) Calling .GetMachineName
	I0612 21:02:42.016948   50965 main.go:141] libmachine: (multinode-991051) Calling .GetIP
	I0612 21:02:42.019343   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.019717   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:02:42.019744   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.019864   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:02:42.022110   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.022473   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:02:42.022501   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.022649   50965 provision.go:143] copyHostCerts
	I0612 21:02:42.022682   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:02:42.022731   50965 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:02:42.022740   50965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:02:42.022823   50965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:02:42.022917   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:02:42.022942   50965 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:02:42.022952   50965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:02:42.022987   50965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:02:42.023061   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:02:42.023091   50965 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:02:42.023100   50965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:02:42.023132   50965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:02:42.023203   50965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.multinode-991051 san=[127.0.0.1 192.168.39.222 localhost minikube multinode-991051]
	I0612 21:02:42.077719   50965 provision.go:177] copyRemoteCerts
	I0612 21:02:42.077773   50965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:02:42.077793   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:02:42.080158   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.080455   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:02:42.080487   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.080658   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:02:42.080827   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:42.080980   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:02:42.081159   50965 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/multinode-991051/id_rsa Username:docker}
	I0612 21:02:42.164442   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0612 21:02:42.164524   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:02:42.191120   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0612 21:02:42.191208   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0612 21:02:42.216174   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0612 21:02:42.216254   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:02:42.241394   50965 provision.go:87] duration metric: took 224.705211ms to configureAuth
	I0612 21:02:42.241439   50965 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:02:42.241688   50965 config.go:182] Loaded profile config "multinode-991051": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:02:42.241756   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:02:42.244235   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.244639   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:02:42.244679   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:02:42.244868   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:02:42.245048   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:42.245203   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:02:42.245368   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:02:42.245528   50965 main.go:141] libmachine: Using SSH client type: native
	I0612 21:02:42.245726   50965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0612 21:02:42.245746   50965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:04:13.100101   50965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:04:13.100130   50965 machine.go:97] duration metric: took 1m31.425815689s to provisionDockerMachine
	I0612 21:04:13.100148   50965 start.go:293] postStartSetup for "multinode-991051" (driver="kvm2")
	I0612 21:04:13.100174   50965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:04:13.100211   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:04:13.100556   50965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:04:13.100589   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:04:13.103889   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.104410   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:04:13.104443   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.104615   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:04:13.104812   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:04:13.105099   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:04:13.105243   50965 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/multinode-991051/id_rsa Username:docker}
	I0612 21:04:13.187857   50965 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:04:13.192342   50965 command_runner.go:130] > NAME=Buildroot
	I0612 21:04:13.192358   50965 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0612 21:04:13.192363   50965 command_runner.go:130] > ID=buildroot
	I0612 21:04:13.192368   50965 command_runner.go:130] > VERSION_ID=2023.02.9
	I0612 21:04:13.192376   50965 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0612 21:04:13.192397   50965 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:04:13.192414   50965 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:04:13.192481   50965 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:04:13.192557   50965 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:04:13.192568   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /etc/ssl/certs/214442.pem
	I0612 21:04:13.192654   50965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:04:13.202763   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:04:13.227793   50965 start.go:296] duration metric: took 127.629637ms for postStartSetup
	I0612 21:04:13.227865   50965 fix.go:56] duration metric: took 1m31.575292097s for fixHost
	I0612 21:04:13.227891   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:04:13.230482   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.230900   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:04:13.230926   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.231077   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:04:13.231267   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:04:13.231419   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:04:13.231557   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:04:13.231751   50965 main.go:141] libmachine: Using SSH client type: native
	I0612 21:04:13.231911   50965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0612 21:04:13.231921   50965 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:04:13.332212   50965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718226253.307559674
	
	I0612 21:04:13.332237   50965 fix.go:216] guest clock: 1718226253.307559674
	I0612 21:04:13.332243   50965 fix.go:229] Guest: 2024-06-12 21:04:13.307559674 +0000 UTC Remote: 2024-06-12 21:04:13.227870843 +0000 UTC m=+91.702103711 (delta=79.688831ms)
	I0612 21:04:13.332268   50965 fix.go:200] guest clock delta is within tolerance: 79.688831ms
	I0612 21:04:13.332272   50965 start.go:83] releasing machines lock for "multinode-991051", held for 1m31.679719168s
	I0612 21:04:13.332301   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:04:13.332562   50965 main.go:141] libmachine: (multinode-991051) Calling .GetIP
	I0612 21:04:13.335254   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.335617   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:04:13.335641   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.335840   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:04:13.336548   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:04:13.336766   50965 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:04:13.336866   50965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:04:13.336915   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:04:13.337007   50965 ssh_runner.go:195] Run: cat /version.json
	I0612 21:04:13.337027   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:04:13.339754   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.339823   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.340143   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:04:13.340162   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.340178   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:04:13.340201   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:13.340338   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:04:13.340472   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:04:13.340546   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:04:13.340629   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:04:13.340688   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:04:13.340831   50965 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:04:13.340848   50965 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/multinode-991051/id_rsa Username:docker}
	I0612 21:04:13.340937   50965 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/multinode-991051/id_rsa Username:docker}
	I0612 21:04:13.426323   50965 command_runner.go:130] > {"iso_version": "v1.33.1-1717668912-19038", "kicbase_version": "v0.0.44-1717518322-19024", "minikube_version": "v1.33.1", "commit": "7bc04027a908a7d4d31c30e8938372fcb07a9689"}
	I0612 21:04:13.426835   50965 ssh_runner.go:195] Run: systemctl --version
	I0612 21:04:13.452322   50965 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0612 21:04:13.452360   50965 command_runner.go:130] > systemd 252 (252)
	I0612 21:04:13.452398   50965 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0612 21:04:13.452474   50965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:04:13.629439   50965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0612 21:04:13.637420   50965 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0612 21:04:13.637705   50965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:04:13.637767   50965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:04:13.677848   50965 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0612 21:04:13.677890   50965 start.go:494] detecting cgroup driver to use...
	I0612 21:04:13.677970   50965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:04:13.696870   50965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:04:13.713136   50965 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:04:13.713186   50965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:04:13.727143   50965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:04:13.740953   50965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:04:13.887957   50965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:04:14.036302   50965 docker.go:233] disabling docker service ...
	I0612 21:04:14.036379   50965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:04:14.054356   50965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:04:14.068441   50965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:04:14.215814   50965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:04:14.364290   50965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:04:14.378568   50965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:04:14.398442   50965 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0612 21:04:14.398498   50965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:04:14.398553   50965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:04:14.411379   50965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:04:14.411464   50965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:04:14.423229   50965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:04:14.435757   50965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:04:14.447736   50965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:04:14.460212   50965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:04:14.471962   50965 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:04:14.484209   50965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:04:14.495578   50965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:04:14.505738   50965 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0612 21:04:14.505820   50965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:04:14.516112   50965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:04:14.656386   50965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:04:20.127985   50965 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.471566471s)
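	The ssh_runner steps above are the CRI-O reconfiguration pass minikube performs before restarting the runtime: point crictl at the CRI-O socket, swap in the kubeadm-matching pause image, switch the cgroup manager to cgroupfs, and open low ports to unprivileged containers. Purely as a condensed sketch (each command is copied from the log lines above; collapsing them into one abridged sequence is only for readability and omits the CNI cleanup steps), the same pass looks like:
	# write crictl config pointing at the CRI-O socket
	printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" | sudo tee /etc/crictl.yaml
	# pause image and cgroup driver in the 02-crio.conf drop-in
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# ensure a default_sysctls list exists, then allow unprivileged low ports
	sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	# enable forwarding and restart CRI-O to pick up the drop-in
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload
	sudo systemctl restart crio
	The 5.47s restart time recorded on the next line is the cost of that final systemctl restart crio.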
	I0612 21:04:20.128022   50965 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:04:20.128066   50965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:04:20.133277   50965 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0612 21:04:20.133306   50965 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0612 21:04:20.133329   50965 command_runner.go:130] > Device: 0,22	Inode: 1342        Links: 1
	I0612 21:04:20.133341   50965 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0612 21:04:20.133349   50965 command_runner.go:130] > Access: 2024-06-12 21:04:19.979078385 +0000
	I0612 21:04:20.133358   50965 command_runner.go:130] > Modify: 2024-06-12 21:04:19.979078385 +0000
	I0612 21:04:20.133366   50965 command_runner.go:130] > Change: 2024-06-12 21:04:19.979078385 +0000
	I0612 21:04:20.133371   50965 command_runner.go:130] >  Birth: -
	I0612 21:04:20.133419   50965 start.go:562] Will wait 60s for crictl version
	I0612 21:04:20.133466   50965 ssh_runner.go:195] Run: which crictl
	I0612 21:04:20.137444   50965 command_runner.go:130] > /usr/bin/crictl
	I0612 21:04:20.137547   50965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:04:20.178278   50965 command_runner.go:130] > Version:  0.1.0
	I0612 21:04:20.178305   50965 command_runner.go:130] > RuntimeName:  cri-o
	I0612 21:04:20.178313   50965 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0612 21:04:20.178322   50965 command_runner.go:130] > RuntimeApiVersion:  v1
	I0612 21:04:20.178344   50965 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:04:20.178396   50965 ssh_runner.go:195] Run: crio --version
	I0612 21:04:20.207067   50965 command_runner.go:130] > crio version 1.29.1
	I0612 21:04:20.207097   50965 command_runner.go:130] > Version:        1.29.1
	I0612 21:04:20.207107   50965 command_runner.go:130] > GitCommit:      unknown
	I0612 21:04:20.207113   50965 command_runner.go:130] > GitCommitDate:  unknown
	I0612 21:04:20.207125   50965 command_runner.go:130] > GitTreeState:   clean
	I0612 21:04:20.207143   50965 command_runner.go:130] > BuildDate:      2024-06-06T15:30:03Z
	I0612 21:04:20.207147   50965 command_runner.go:130] > GoVersion:      go1.21.6
	I0612 21:04:20.207151   50965 command_runner.go:130] > Compiler:       gc
	I0612 21:04:20.207155   50965 command_runner.go:130] > Platform:       linux/amd64
	I0612 21:04:20.207159   50965 command_runner.go:130] > Linkmode:       dynamic
	I0612 21:04:20.207163   50965 command_runner.go:130] > BuildTags:      
	I0612 21:04:20.207167   50965 command_runner.go:130] >   containers_image_ostree_stub
	I0612 21:04:20.207189   50965 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0612 21:04:20.207195   50965 command_runner.go:130] >   btrfs_noversion
	I0612 21:04:20.207203   50965 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0612 21:04:20.207211   50965 command_runner.go:130] >   libdm_no_deferred_remove
	I0612 21:04:20.207216   50965 command_runner.go:130] >   seccomp
	I0612 21:04:20.207222   50965 command_runner.go:130] > LDFlags:          unknown
	I0612 21:04:20.207226   50965 command_runner.go:130] > SeccompEnabled:   true
	I0612 21:04:20.207232   50965 command_runner.go:130] > AppArmorEnabled:  false
	I0612 21:04:20.207321   50965 ssh_runner.go:195] Run: crio --version
	I0612 21:04:20.236266   50965 command_runner.go:130] > crio version 1.29.1
	I0612 21:04:20.236293   50965 command_runner.go:130] > Version:        1.29.1
	I0612 21:04:20.236302   50965 command_runner.go:130] > GitCommit:      unknown
	I0612 21:04:20.236309   50965 command_runner.go:130] > GitCommitDate:  unknown
	I0612 21:04:20.236317   50965 command_runner.go:130] > GitTreeState:   clean
	I0612 21:04:20.236326   50965 command_runner.go:130] > BuildDate:      2024-06-06T15:30:03Z
	I0612 21:04:20.236333   50965 command_runner.go:130] > GoVersion:      go1.21.6
	I0612 21:04:20.236340   50965 command_runner.go:130] > Compiler:       gc
	I0612 21:04:20.236348   50965 command_runner.go:130] > Platform:       linux/amd64
	I0612 21:04:20.236355   50965 command_runner.go:130] > Linkmode:       dynamic
	I0612 21:04:20.236362   50965 command_runner.go:130] > BuildTags:      
	I0612 21:04:20.236374   50965 command_runner.go:130] >   containers_image_ostree_stub
	I0612 21:04:20.236382   50965 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0612 21:04:20.236389   50965 command_runner.go:130] >   btrfs_noversion
	I0612 21:04:20.236396   50965 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0612 21:04:20.236403   50965 command_runner.go:130] >   libdm_no_deferred_remove
	I0612 21:04:20.236409   50965 command_runner.go:130] >   seccomp
	I0612 21:04:20.236419   50965 command_runner.go:130] > LDFlags:          unknown
	I0612 21:04:20.236426   50965 command_runner.go:130] > SeccompEnabled:   true
	I0612 21:04:20.236435   50965 command_runner.go:130] > AppArmorEnabled:  false
	I0612 21:04:20.239737   50965 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:04:20.241132   50965 main.go:141] libmachine: (multinode-991051) Calling .GetIP
	I0612 21:04:20.243954   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:20.244357   50965 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:04:20.244384   50965 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:04:20.244556   50965 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 21:04:20.248987   50965 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0612 21:04:20.249164   50965 kubeadm.go:877] updating cluster {Name:multinode-991051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-991051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:04:20.249319   50965 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:04:20.249374   50965 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:04:20.299319   50965 command_runner.go:130] > {
	I0612 21:04:20.299341   50965 command_runner.go:130] >   "images": [
	I0612 21:04:20.299355   50965 command_runner.go:130] >     {
	I0612 21:04:20.299366   50965 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0612 21:04:20.299373   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.299384   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0612 21:04:20.299390   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299401   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.299413   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0612 21:04:20.299424   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0612 21:04:20.299433   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299442   50965 command_runner.go:130] >       "size": "65291810",
	I0612 21:04:20.299452   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.299460   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.299473   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.299483   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.299489   50965 command_runner.go:130] >     },
	I0612 21:04:20.299494   50965 command_runner.go:130] >     {
	I0612 21:04:20.299505   50965 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0612 21:04:20.299515   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.299524   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0612 21:04:20.299530   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299540   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.299553   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0612 21:04:20.299571   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0612 21:04:20.299578   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299585   50965 command_runner.go:130] >       "size": "65908273",
	I0612 21:04:20.299593   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.299604   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.299614   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.299621   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.299627   50965 command_runner.go:130] >     },
	I0612 21:04:20.299634   50965 command_runner.go:130] >     {
	I0612 21:04:20.299647   50965 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0612 21:04:20.299658   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.299670   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0612 21:04:20.299679   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299687   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.299703   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0612 21:04:20.299719   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0612 21:04:20.299729   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299737   50965 command_runner.go:130] >       "size": "1363676",
	I0612 21:04:20.299745   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.299759   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.299769   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.299778   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.299787   50965 command_runner.go:130] >     },
	I0612 21:04:20.299793   50965 command_runner.go:130] >     {
	I0612 21:04:20.299806   50965 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0612 21:04:20.299814   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.299827   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0612 21:04:20.299835   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299843   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.299859   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0612 21:04:20.299885   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0612 21:04:20.299893   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299900   50965 command_runner.go:130] >       "size": "31470524",
	I0612 21:04:20.299906   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.299913   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.299923   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.299931   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.299940   50965 command_runner.go:130] >     },
	I0612 21:04:20.299946   50965 command_runner.go:130] >     {
	I0612 21:04:20.299961   50965 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0612 21:04:20.299970   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.299981   50965 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0612 21:04:20.299989   50965 command_runner.go:130] >       ],
	I0612 21:04:20.299996   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.300012   50965 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0612 21:04:20.300044   50965 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0612 21:04:20.300053   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300058   50965 command_runner.go:130] >       "size": "61245718",
	I0612 21:04:20.300065   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.300073   50965 command_runner.go:130] >       "username": "nonroot",
	I0612 21:04:20.300082   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.300089   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.300102   50965 command_runner.go:130] >     },
	I0612 21:04:20.300111   50965 command_runner.go:130] >     {
	I0612 21:04:20.300124   50965 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0612 21:04:20.300139   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.300151   50965 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0612 21:04:20.300160   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300167   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.300179   50965 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0612 21:04:20.300194   50965 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0612 21:04:20.300204   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300212   50965 command_runner.go:130] >       "size": "150779692",
	I0612 21:04:20.300219   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.300229   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.300236   50965 command_runner.go:130] >       },
	I0612 21:04:20.300250   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.300260   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.300268   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.300274   50965 command_runner.go:130] >     },
	I0612 21:04:20.300278   50965 command_runner.go:130] >     {
	I0612 21:04:20.300288   50965 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0612 21:04:20.300298   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.300311   50965 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0612 21:04:20.300319   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300327   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.300342   50965 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0612 21:04:20.300359   50965 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0612 21:04:20.300367   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300374   50965 command_runner.go:130] >       "size": "117601759",
	I0612 21:04:20.300383   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.300390   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.300400   50965 command_runner.go:130] >       },
	I0612 21:04:20.300407   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.300417   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.300425   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.300432   50965 command_runner.go:130] >     },
	I0612 21:04:20.300440   50965 command_runner.go:130] >     {
	I0612 21:04:20.300452   50965 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0612 21:04:20.300461   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.300471   50965 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0612 21:04:20.300487   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300497   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.300529   50965 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0612 21:04:20.300545   50965 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0612 21:04:20.300554   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300562   50965 command_runner.go:130] >       "size": "112170310",
	I0612 21:04:20.300571   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.300578   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.300586   50965 command_runner.go:130] >       },
	I0612 21:04:20.300591   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.300596   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.300601   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.300606   50965 command_runner.go:130] >     },
	I0612 21:04:20.300612   50965 command_runner.go:130] >     {
	I0612 21:04:20.300621   50965 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0612 21:04:20.300627   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.300635   50965 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0612 21:04:20.300640   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300647   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.300659   50965 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0612 21:04:20.300672   50965 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0612 21:04:20.300678   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300684   50965 command_runner.go:130] >       "size": "85933465",
	I0612 21:04:20.300691   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.300697   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.300703   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.300713   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.300720   50965 command_runner.go:130] >     },
	I0612 21:04:20.300728   50965 command_runner.go:130] >     {
	I0612 21:04:20.300742   50965 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0612 21:04:20.300752   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.300761   50965 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0612 21:04:20.300771   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300780   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.300796   50965 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0612 21:04:20.300812   50965 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0612 21:04:20.300829   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300839   50965 command_runner.go:130] >       "size": "63026504",
	I0612 21:04:20.300849   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.300857   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.300865   50965 command_runner.go:130] >       },
	I0612 21:04:20.300872   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.300882   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.300888   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.300896   50965 command_runner.go:130] >     },
	I0612 21:04:20.300903   50965 command_runner.go:130] >     {
	I0612 21:04:20.300916   50965 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0612 21:04:20.300927   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.300937   50965 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0612 21:04:20.300945   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300952   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.300967   50965 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0612 21:04:20.300982   50965 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0612 21:04:20.300991   50965 command_runner.go:130] >       ],
	I0612 21:04:20.300999   50965 command_runner.go:130] >       "size": "750414",
	I0612 21:04:20.301008   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.301017   50965 command_runner.go:130] >         "value": "65535"
	I0612 21:04:20.301031   50965 command_runner.go:130] >       },
	I0612 21:04:20.301041   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.301051   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.301059   50965 command_runner.go:130] >       "pinned": true
	I0612 21:04:20.301067   50965 command_runner.go:130] >     }
	I0612 21:04:20.301073   50965 command_runner.go:130] >   ]
	I0612 21:04:20.301081   50965 command_runner.go:130] > }
	I0612 21:04:20.301300   50965 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:04:20.301314   50965 crio.go:433] Images already preloaded, skipping extraction
	I0612 21:04:20.301383   50965 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:04:20.342496   50965 command_runner.go:130] > {
	I0612 21:04:20.342525   50965 command_runner.go:130] >   "images": [
	I0612 21:04:20.342532   50965 command_runner.go:130] >     {
	I0612 21:04:20.342549   50965 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0612 21:04:20.342553   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.342563   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0612 21:04:20.342567   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342571   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.342579   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0612 21:04:20.342586   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0612 21:04:20.342590   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342595   50965 command_runner.go:130] >       "size": "65291810",
	I0612 21:04:20.342598   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.342602   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.342607   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.342611   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.342615   50965 command_runner.go:130] >     },
	I0612 21:04:20.342618   50965 command_runner.go:130] >     {
	I0612 21:04:20.342630   50965 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0612 21:04:20.342638   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.342642   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0612 21:04:20.342646   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342649   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.342656   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0612 21:04:20.342662   50965 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0612 21:04:20.342666   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342671   50965 command_runner.go:130] >       "size": "65908273",
	I0612 21:04:20.342674   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.342682   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.342688   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.342701   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.342707   50965 command_runner.go:130] >     },
	I0612 21:04:20.342710   50965 command_runner.go:130] >     {
	I0612 21:04:20.342717   50965 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0612 21:04:20.342723   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.342728   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0612 21:04:20.342731   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342735   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.342745   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0612 21:04:20.342751   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0612 21:04:20.342758   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342761   50965 command_runner.go:130] >       "size": "1363676",
	I0612 21:04:20.342765   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.342769   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.342773   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.342777   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.342780   50965 command_runner.go:130] >     },
	I0612 21:04:20.342783   50965 command_runner.go:130] >     {
	I0612 21:04:20.342789   50965 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0612 21:04:20.342796   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.342801   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0612 21:04:20.342806   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342811   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.342820   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0612 21:04:20.342840   50965 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0612 21:04:20.342846   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342850   50965 command_runner.go:130] >       "size": "31470524",
	I0612 21:04:20.342854   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.342858   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.342861   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.342865   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.342869   50965 command_runner.go:130] >     },
	I0612 21:04:20.342872   50965 command_runner.go:130] >     {
	I0612 21:04:20.342879   50965 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0612 21:04:20.342883   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.342888   50965 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0612 21:04:20.342894   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342897   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.342904   50965 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0612 21:04:20.342914   50965 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0612 21:04:20.342917   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342922   50965 command_runner.go:130] >       "size": "61245718",
	I0612 21:04:20.342928   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.342931   50965 command_runner.go:130] >       "username": "nonroot",
	I0612 21:04:20.342935   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.342938   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.342942   50965 command_runner.go:130] >     },
	I0612 21:04:20.342945   50965 command_runner.go:130] >     {
	I0612 21:04:20.342953   50965 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0612 21:04:20.342960   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.342964   50965 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0612 21:04:20.342970   50965 command_runner.go:130] >       ],
	I0612 21:04:20.342974   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.342982   50965 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0612 21:04:20.342991   50965 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0612 21:04:20.342999   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343008   50965 command_runner.go:130] >       "size": "150779692",
	I0612 21:04:20.343016   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.343019   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.343025   50965 command_runner.go:130] >       },
	I0612 21:04:20.343034   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.343040   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.343044   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.343050   50965 command_runner.go:130] >     },
	I0612 21:04:20.343053   50965 command_runner.go:130] >     {
	I0612 21:04:20.343061   50965 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0612 21:04:20.343066   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.343071   50965 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0612 21:04:20.343077   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343081   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.343091   50965 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0612 21:04:20.343100   50965 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0612 21:04:20.343106   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343110   50965 command_runner.go:130] >       "size": "117601759",
	I0612 21:04:20.343116   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.343120   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.343125   50965 command_runner.go:130] >       },
	I0612 21:04:20.343129   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.343135   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.343139   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.343145   50965 command_runner.go:130] >     },
	I0612 21:04:20.343148   50965 command_runner.go:130] >     {
	I0612 21:04:20.343156   50965 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0612 21:04:20.343162   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.343184   50965 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0612 21:04:20.343193   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343199   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.343223   50965 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0612 21:04:20.343235   50965 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0612 21:04:20.343239   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343248   50965 command_runner.go:130] >       "size": "112170310",
	I0612 21:04:20.343254   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.343258   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.343264   50965 command_runner.go:130] >       },
	I0612 21:04:20.343268   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.343274   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.343283   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.343289   50965 command_runner.go:130] >     },
	I0612 21:04:20.343292   50965 command_runner.go:130] >     {
	I0612 21:04:20.343300   50965 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0612 21:04:20.343304   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.343311   50965 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0612 21:04:20.343315   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343319   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.343326   50965 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0612 21:04:20.343336   50965 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0612 21:04:20.343340   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343344   50965 command_runner.go:130] >       "size": "85933465",
	I0612 21:04:20.343347   50965 command_runner.go:130] >       "uid": null,
	I0612 21:04:20.343351   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.343355   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.343359   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.343363   50965 command_runner.go:130] >     },
	I0612 21:04:20.343366   50965 command_runner.go:130] >     {
	I0612 21:04:20.343372   50965 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0612 21:04:20.343376   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.343381   50965 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0612 21:04:20.343385   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343388   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.343398   50965 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0612 21:04:20.343404   50965 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0612 21:04:20.343410   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343413   50965 command_runner.go:130] >       "size": "63026504",
	I0612 21:04:20.343417   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.343421   50965 command_runner.go:130] >         "value": "0"
	I0612 21:04:20.343433   50965 command_runner.go:130] >       },
	I0612 21:04:20.343437   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.343441   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.343448   50965 command_runner.go:130] >       "pinned": false
	I0612 21:04:20.343451   50965 command_runner.go:130] >     },
	I0612 21:04:20.343454   50965 command_runner.go:130] >     {
	I0612 21:04:20.343462   50965 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0612 21:04:20.343470   50965 command_runner.go:130] >       "repoTags": [
	I0612 21:04:20.343477   50965 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0612 21:04:20.343480   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343484   50965 command_runner.go:130] >       "repoDigests": [
	I0612 21:04:20.343495   50965 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0612 21:04:20.343505   50965 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0612 21:04:20.343508   50965 command_runner.go:130] >       ],
	I0612 21:04:20.343512   50965 command_runner.go:130] >       "size": "750414",
	I0612 21:04:20.343515   50965 command_runner.go:130] >       "uid": {
	I0612 21:04:20.343519   50965 command_runner.go:130] >         "value": "65535"
	I0612 21:04:20.343524   50965 command_runner.go:130] >       },
	I0612 21:04:20.343531   50965 command_runner.go:130] >       "username": "",
	I0612 21:04:20.343537   50965 command_runner.go:130] >       "spec": null,
	I0612 21:04:20.343545   50965 command_runner.go:130] >       "pinned": true
	I0612 21:04:20.343550   50965 command_runner.go:130] >     }
	I0612 21:04:20.343558   50965 command_runner.go:130] >   ]
	I0612 21:04:20.343562   50965 command_runner.go:130] > }
	I0612 21:04:20.343736   50965 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:04:20.343751   50965 cache_images.go:84] Images are preloaded, skipping loading
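	The two sudo crictl images --output json dumps above are the evidence minikube uses to decide the v1.30.1 preload tarball does not need to be re-extracted. A hypothetical spot-check of the same data directly on the node, assuming jq is available (it is not guaranteed to be on the minikube ISO), would be:
	# list just the image tags CRI-O currently has cached
	sudo crictl images --output json | jq -r '.images[].repoTags[]'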
	I0612 21:04:20.343767   50965 kubeadm.go:928] updating node { 192.168.39.222 8443 v1.30.1 crio true true} ...
	I0612 21:04:20.343884   50965 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-991051 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-991051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:04:20.343945   50965 ssh_runner.go:195] Run: crio config
	I0612 21:04:20.387207   50965 command_runner.go:130] ! time="2024-06-12 21:04:20.362161457Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0612 21:04:20.392923   50965 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0612 21:04:20.406718   50965 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0612 21:04:20.406743   50965 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0612 21:04:20.406750   50965 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0612 21:04:20.406753   50965 command_runner.go:130] > #
	I0612 21:04:20.406759   50965 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0612 21:04:20.406765   50965 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0612 21:04:20.406771   50965 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0612 21:04:20.406778   50965 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0612 21:04:20.406781   50965 command_runner.go:130] > # reload'.
	I0612 21:04:20.406787   50965 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0612 21:04:20.406792   50965 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0612 21:04:20.406798   50965 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0612 21:04:20.406803   50965 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0612 21:04:20.406806   50965 command_runner.go:130] > [crio]
	I0612 21:04:20.406812   50965 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0612 21:04:20.406817   50965 command_runner.go:130] > # containers images, in this directory.
	I0612 21:04:20.406831   50965 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0612 21:04:20.406841   50965 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0612 21:04:20.406849   50965 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0612 21:04:20.406859   50965 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0612 21:04:20.406862   50965 command_runner.go:130] > # imagestore = ""
	I0612 21:04:20.406868   50965 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0612 21:04:20.406875   50965 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0612 21:04:20.406879   50965 command_runner.go:130] > storage_driver = "overlay"
	I0612 21:04:20.406895   50965 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0612 21:04:20.406903   50965 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0612 21:04:20.406907   50965 command_runner.go:130] > storage_option = [
	I0612 21:04:20.406911   50965 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0612 21:04:20.406914   50965 command_runner.go:130] > ]
	I0612 21:04:20.406920   50965 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0612 21:04:20.406926   50965 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0612 21:04:20.406931   50965 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0612 21:04:20.406936   50965 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0612 21:04:20.406942   50965 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0612 21:04:20.406946   50965 command_runner.go:130] > # always happen on a node reboot
	I0612 21:04:20.406954   50965 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0612 21:04:20.406966   50965 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0612 21:04:20.406974   50965 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0612 21:04:20.406979   50965 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0612 21:04:20.406983   50965 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0612 21:04:20.406993   50965 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0612 21:04:20.407000   50965 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0612 21:04:20.407008   50965 command_runner.go:130] > # internal_wipe = true
	I0612 21:04:20.407019   50965 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0612 21:04:20.407025   50965 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0612 21:04:20.407031   50965 command_runner.go:130] > # internal_repair = false
	I0612 21:04:20.407036   50965 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0612 21:04:20.407045   50965 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0612 21:04:20.407050   50965 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0612 21:04:20.407057   50965 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0612 21:04:20.407063   50965 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0612 21:04:20.407068   50965 command_runner.go:130] > [crio.api]
	I0612 21:04:20.407077   50965 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0612 21:04:20.407085   50965 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0612 21:04:20.407090   50965 command_runner.go:130] > # IP address on which the stream server will listen.
	I0612 21:04:20.407096   50965 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0612 21:04:20.407102   50965 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0612 21:04:20.407109   50965 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0612 21:04:20.407113   50965 command_runner.go:130] > # stream_port = "0"
	I0612 21:04:20.407118   50965 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0612 21:04:20.407125   50965 command_runner.go:130] > # stream_enable_tls = false
	I0612 21:04:20.407140   50965 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0612 21:04:20.407147   50965 command_runner.go:130] > # stream_idle_timeout = ""
	I0612 21:04:20.407153   50965 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0612 21:04:20.407161   50965 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0612 21:04:20.407165   50965 command_runner.go:130] > # minutes.
	I0612 21:04:20.407181   50965 command_runner.go:130] > # stream_tls_cert = ""
	I0612 21:04:20.407193   50965 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0612 21:04:20.407206   50965 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0612 21:04:20.407213   50965 command_runner.go:130] > # stream_tls_key = ""
	I0612 21:04:20.407219   50965 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0612 21:04:20.407229   50965 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0612 21:04:20.407261   50965 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0612 21:04:20.407269   50965 command_runner.go:130] > # stream_tls_ca = ""
	I0612 21:04:20.407276   50965 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0612 21:04:20.407280   50965 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0612 21:04:20.407287   50965 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0612 21:04:20.407292   50965 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0612 21:04:20.407302   50965 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0612 21:04:20.407309   50965 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0612 21:04:20.407315   50965 command_runner.go:130] > [crio.runtime]
	I0612 21:04:20.407321   50965 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0612 21:04:20.407329   50965 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0612 21:04:20.407332   50965 command_runner.go:130] > # "nofile=1024:2048"
	I0612 21:04:20.407341   50965 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0612 21:04:20.407345   50965 command_runner.go:130] > # default_ulimits = [
	I0612 21:04:20.407348   50965 command_runner.go:130] > # ]
	I0612 21:04:20.407354   50965 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0612 21:04:20.407365   50965 command_runner.go:130] > # no_pivot = false
	I0612 21:04:20.407371   50965 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0612 21:04:20.407379   50965 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0612 21:04:20.407384   50965 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0612 21:04:20.407391   50965 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0612 21:04:20.407396   50965 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0612 21:04:20.407405   50965 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0612 21:04:20.407410   50965 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0612 21:04:20.407416   50965 command_runner.go:130] > # Cgroup setting for conmon
	I0612 21:04:20.407423   50965 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0612 21:04:20.407429   50965 command_runner.go:130] > conmon_cgroup = "pod"
	I0612 21:04:20.407435   50965 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0612 21:04:20.407440   50965 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0612 21:04:20.407447   50965 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0612 21:04:20.407452   50965 command_runner.go:130] > conmon_env = [
	I0612 21:04:20.407457   50965 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0612 21:04:20.407463   50965 command_runner.go:130] > ]
	I0612 21:04:20.407468   50965 command_runner.go:130] > # Additional environment variables to set for all the
	I0612 21:04:20.407473   50965 command_runner.go:130] > # containers. These are overridden if set in the
	I0612 21:04:20.407481   50965 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0612 21:04:20.407485   50965 command_runner.go:130] > # default_env = [
	I0612 21:04:20.407491   50965 command_runner.go:130] > # ]
	I0612 21:04:20.407496   50965 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0612 21:04:20.407505   50965 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0612 21:04:20.407509   50965 command_runner.go:130] > # selinux = false
	I0612 21:04:20.407517   50965 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0612 21:04:20.407522   50965 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0612 21:04:20.407528   50965 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0612 21:04:20.407532   50965 command_runner.go:130] > # seccomp_profile = ""
	I0612 21:04:20.407537   50965 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0612 21:04:20.407543   50965 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0612 21:04:20.407551   50965 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0612 21:04:20.407556   50965 command_runner.go:130] > # which might increase security.
	I0612 21:04:20.407561   50965 command_runner.go:130] > # This option is currently deprecated,
	I0612 21:04:20.407566   50965 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0612 21:04:20.407573   50965 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0612 21:04:20.407583   50965 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0612 21:04:20.407591   50965 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0612 21:04:20.407597   50965 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0612 21:04:20.407605   50965 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I0612 21:04:20.407610   50965 command_runner.go:130] > # This option supports live configuration reload.
	I0612 21:04:20.407617   50965 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0612 21:04:20.407622   50965 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0612 21:04:20.407626   50965 command_runner.go:130] > # the cgroup blockio controller.
	I0612 21:04:20.407630   50965 command_runner.go:130] > # blockio_config_file = ""
	I0612 21:04:20.407639   50965 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0612 21:04:20.407645   50965 command_runner.go:130] > # blockio parameters.
	I0612 21:04:20.407648   50965 command_runner.go:130] > # blockio_reload = false
	I0612 21:04:20.407657   50965 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0612 21:04:20.407661   50965 command_runner.go:130] > # irqbalance daemon.
	I0612 21:04:20.407674   50965 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0612 21:04:20.407682   50965 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask that CRI-O should
	I0612 21:04:20.407689   50965 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0612 21:04:20.407697   50965 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0612 21:04:20.407703   50965 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0612 21:04:20.407712   50965 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0612 21:04:20.407717   50965 command_runner.go:130] > # This option supports live configuration reload.
	I0612 21:04:20.407721   50965 command_runner.go:130] > # rdt_config_file = ""
	I0612 21:04:20.407726   50965 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0612 21:04:20.407733   50965 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0612 21:04:20.407759   50965 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0612 21:04:20.407767   50965 command_runner.go:130] > # separate_pull_cgroup = ""
	I0612 21:04:20.407773   50965 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0612 21:04:20.407778   50965 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0612 21:04:20.407782   50965 command_runner.go:130] > # will be added.
	I0612 21:04:20.407786   50965 command_runner.go:130] > # default_capabilities = [
	I0612 21:04:20.407790   50965 command_runner.go:130] > # 	"CHOWN",
	I0612 21:04:20.407793   50965 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0612 21:04:20.407797   50965 command_runner.go:130] > # 	"FSETID",
	I0612 21:04:20.407800   50965 command_runner.go:130] > # 	"FOWNER",
	I0612 21:04:20.407804   50965 command_runner.go:130] > # 	"SETGID",
	I0612 21:04:20.407807   50965 command_runner.go:130] > # 	"SETUID",
	I0612 21:04:20.407816   50965 command_runner.go:130] > # 	"SETPCAP",
	I0612 21:04:20.407823   50965 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0612 21:04:20.407826   50965 command_runner.go:130] > # 	"KILL",
	I0612 21:04:20.407829   50965 command_runner.go:130] > # ]
	I0612 21:04:20.407836   50965 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0612 21:04:20.407845   50965 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0612 21:04:20.407849   50965 command_runner.go:130] > # add_inheritable_capabilities = false
	I0612 21:04:20.407856   50965 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0612 21:04:20.407863   50965 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0612 21:04:20.407867   50965 command_runner.go:130] > default_sysctls = [
	I0612 21:04:20.407873   50965 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0612 21:04:20.407876   50965 command_runner.go:130] > ]
	I0612 21:04:20.407881   50965 command_runner.go:130] > # List of devices on the host that a
	I0612 21:04:20.407889   50965 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0612 21:04:20.407893   50965 command_runner.go:130] > # allowed_devices = [
	I0612 21:04:20.407900   50965 command_runner.go:130] > # 	"/dev/fuse",
	I0612 21:04:20.407903   50965 command_runner.go:130] > # ]
	I0612 21:04:20.407908   50965 command_runner.go:130] > # List of additional devices, specified as
	I0612 21:04:20.407917   50965 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0612 21:04:20.407922   50965 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0612 21:04:20.407930   50965 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0612 21:04:20.407934   50965 command_runner.go:130] > # additional_devices = [
	I0612 21:04:20.407937   50965 command_runner.go:130] > # ]
	I0612 21:04:20.407942   50965 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0612 21:04:20.407949   50965 command_runner.go:130] > # cdi_spec_dirs = [
	I0612 21:04:20.407953   50965 command_runner.go:130] > # 	"/etc/cdi",
	I0612 21:04:20.407958   50965 command_runner.go:130] > # 	"/var/run/cdi",
	I0612 21:04:20.407962   50965 command_runner.go:130] > # ]
	I0612 21:04:20.407968   50965 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0612 21:04:20.407974   50965 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0612 21:04:20.407978   50965 command_runner.go:130] > # Defaults to false.
	I0612 21:04:20.407983   50965 command_runner.go:130] > # device_ownership_from_security_context = false
	I0612 21:04:20.407990   50965 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0612 21:04:20.407996   50965 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0612 21:04:20.408000   50965 command_runner.go:130] > # hooks_dir = [
	I0612 21:04:20.408007   50965 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0612 21:04:20.408017   50965 command_runner.go:130] > # ]
	I0612 21:04:20.408023   50965 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0612 21:04:20.408031   50965 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0612 21:04:20.408036   50965 command_runner.go:130] > # its default mounts from the following two files:
	I0612 21:04:20.408040   50965 command_runner.go:130] > #
	I0612 21:04:20.408046   50965 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0612 21:04:20.408054   50965 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0612 21:04:20.408059   50965 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0612 21:04:20.408065   50965 command_runner.go:130] > #
	I0612 21:04:20.408071   50965 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0612 21:04:20.408077   50965 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0612 21:04:20.408083   50965 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0612 21:04:20.408090   50965 command_runner.go:130] > #      only add mounts it finds in this file.
	I0612 21:04:20.408093   50965 command_runner.go:130] > #
	I0612 21:04:20.408097   50965 command_runner.go:130] > # default_mounts_file = ""
	I0612 21:04:20.408104   50965 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0612 21:04:20.408110   50965 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0612 21:04:20.408117   50965 command_runner.go:130] > pids_limit = 1024
	I0612 21:04:20.408122   50965 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0612 21:04:20.408130   50965 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0612 21:04:20.408138   50965 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0612 21:04:20.408145   50965 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0612 21:04:20.408152   50965 command_runner.go:130] > # log_size_max = -1
	I0612 21:04:20.408158   50965 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0612 21:04:20.408162   50965 command_runner.go:130] > # log_to_journald = false
	I0612 21:04:20.408170   50965 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0612 21:04:20.408175   50965 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0612 21:04:20.408182   50965 command_runner.go:130] > # Path to directory for container attach sockets.
	I0612 21:04:20.408187   50965 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0612 21:04:20.408194   50965 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0612 21:04:20.408199   50965 command_runner.go:130] > # bind_mount_prefix = ""
	I0612 21:04:20.408205   50965 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0612 21:04:20.408209   50965 command_runner.go:130] > # read_only = false
	I0612 21:04:20.408214   50965 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0612 21:04:20.408227   50965 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0612 21:04:20.408234   50965 command_runner.go:130] > # live configuration reload.
	I0612 21:04:20.408246   50965 command_runner.go:130] > # log_level = "info"
	I0612 21:04:20.408257   50965 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0612 21:04:20.408264   50965 command_runner.go:130] > # This option supports live configuration reload.
	I0612 21:04:20.408268   50965 command_runner.go:130] > # log_filter = ""
	I0612 21:04:20.408274   50965 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0612 21:04:20.408280   50965 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0612 21:04:20.408286   50965 command_runner.go:130] > # separated by comma.
	I0612 21:04:20.408293   50965 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0612 21:04:20.408299   50965 command_runner.go:130] > # uid_mappings = ""
	I0612 21:04:20.408305   50965 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0612 21:04:20.408315   50965 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0612 21:04:20.408319   50965 command_runner.go:130] > # separated by comma.
	I0612 21:04:20.408329   50965 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0612 21:04:20.408336   50965 command_runner.go:130] > # gid_mappings = ""
	I0612 21:04:20.408342   50965 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0612 21:04:20.408350   50965 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0612 21:04:20.408359   50965 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0612 21:04:20.408369   50965 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0612 21:04:20.408373   50965 command_runner.go:130] > # minimum_mappable_uid = -1
	I0612 21:04:20.408380   50965 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0612 21:04:20.408390   50965 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0612 21:04:20.408398   50965 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0612 21:04:20.408405   50965 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0612 21:04:20.408412   50965 command_runner.go:130] > # minimum_mappable_gid = -1
	I0612 21:04:20.408418   50965 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0612 21:04:20.408426   50965 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0612 21:04:20.408432   50965 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0612 21:04:20.408438   50965 command_runner.go:130] > # ctr_stop_timeout = 30
	I0612 21:04:20.408444   50965 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0612 21:04:20.408450   50965 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0612 21:04:20.408454   50965 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0612 21:04:20.408459   50965 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0612 21:04:20.408466   50965 command_runner.go:130] > drop_infra_ctr = false
	I0612 21:04:20.408471   50965 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0612 21:04:20.408479   50965 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0612 21:04:20.408487   50965 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0612 21:04:20.408497   50965 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0612 21:04:20.408507   50965 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0612 21:04:20.408513   50965 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0612 21:04:20.408520   50965 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0612 21:04:20.408525   50965 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0612 21:04:20.408531   50965 command_runner.go:130] > # shared_cpuset = ""
	I0612 21:04:20.408536   50965 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0612 21:04:20.408543   50965 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0612 21:04:20.408552   50965 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0612 21:04:20.408564   50965 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0612 21:04:20.408570   50965 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0612 21:04:20.408575   50965 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0612 21:04:20.408583   50965 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0612 21:04:20.408587   50965 command_runner.go:130] > # enable_criu_support = false
	I0612 21:04:20.408595   50965 command_runner.go:130] > # Enable/disable the generation of the container,
	I0612 21:04:20.408600   50965 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0612 21:04:20.408607   50965 command_runner.go:130] > # enable_pod_events = false
	I0612 21:04:20.408613   50965 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0612 21:04:20.408626   50965 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0612 21:04:20.408629   50965 command_runner.go:130] > # default_runtime = "runc"
	I0612 21:04:20.408635   50965 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0612 21:04:20.408642   50965 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of being created as a directory).
	I0612 21:04:20.408653   50965 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0612 21:04:20.408660   50965 command_runner.go:130] > # creation as a file is not desired either.
	I0612 21:04:20.408671   50965 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0612 21:04:20.408678   50965 command_runner.go:130] > # the hostname is being managed dynamically.
	I0612 21:04:20.408682   50965 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0612 21:04:20.408688   50965 command_runner.go:130] > # ]
	I0612 21:04:20.408694   50965 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0612 21:04:20.408702   50965 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0612 21:04:20.408709   50965 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0612 21:04:20.408716   50965 command_runner.go:130] > # Each entry in the table should follow the format:
	I0612 21:04:20.408719   50965 command_runner.go:130] > #
	I0612 21:04:20.408724   50965 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0612 21:04:20.408737   50965 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0612 21:04:20.408782   50965 command_runner.go:130] > # runtime_type = "oci"
	I0612 21:04:20.408790   50965 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0612 21:04:20.408794   50965 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0612 21:04:20.408799   50965 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0612 21:04:20.408803   50965 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0612 21:04:20.408807   50965 command_runner.go:130] > # monitor_env = []
	I0612 21:04:20.408811   50965 command_runner.go:130] > # privileged_without_host_devices = false
	I0612 21:04:20.408815   50965 command_runner.go:130] > # allowed_annotations = []
	I0612 21:04:20.408820   50965 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0612 21:04:20.408826   50965 command_runner.go:130] > # Where:
	I0612 21:04:20.408832   50965 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0612 21:04:20.408840   50965 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0612 21:04:20.408846   50965 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0612 21:04:20.408854   50965 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0612 21:04:20.408858   50965 command_runner.go:130] > #   in $PATH.
	I0612 21:04:20.408863   50965 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0612 21:04:20.408871   50965 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0612 21:04:20.408877   50965 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0612 21:04:20.408883   50965 command_runner.go:130] > #   state.
	I0612 21:04:20.408889   50965 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0612 21:04:20.408897   50965 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0612 21:04:20.408902   50965 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0612 21:04:20.408907   50965 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0612 21:04:20.408916   50965 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0612 21:04:20.408922   50965 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0612 21:04:20.408929   50965 command_runner.go:130] > #   The currently recognized values are:
	I0612 21:04:20.408935   50965 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0612 21:04:20.408942   50965 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0612 21:04:20.408950   50965 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0612 21:04:20.408958   50965 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0612 21:04:20.408966   50965 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0612 21:04:20.408974   50965 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0612 21:04:20.408980   50965 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0612 21:04:20.408988   50965 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0612 21:04:20.408994   50965 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0612 21:04:20.409004   50965 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0612 21:04:20.409016   50965 command_runner.go:130] > #   deprecated option "conmon".
	I0612 21:04:20.409025   50965 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0612 21:04:20.409031   50965 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0612 21:04:20.409040   50965 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0612 21:04:20.409045   50965 command_runner.go:130] > #   should be moved to the container's cgroup
	I0612 21:04:20.409054   50965 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0612 21:04:20.409059   50965 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0612 21:04:20.409068   50965 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0612 21:04:20.409073   50965 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0612 21:04:20.409078   50965 command_runner.go:130] > #
	I0612 21:04:20.409083   50965 command_runner.go:130] > # Using the seccomp notifier feature:
	I0612 21:04:20.409086   50965 command_runner.go:130] > #
	I0612 21:04:20.409092   50965 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0612 21:04:20.409100   50965 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0612 21:04:20.409103   50965 command_runner.go:130] > #
	I0612 21:04:20.409109   50965 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0612 21:04:20.409117   50965 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0612 21:04:20.409120   50965 command_runner.go:130] > #
	I0612 21:04:20.409126   50965 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0612 21:04:20.409132   50965 command_runner.go:130] > # feature.
	I0612 21:04:20.409135   50965 command_runner.go:130] > #
	I0612 21:04:20.409140   50965 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0612 21:04:20.409146   50965 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0612 21:04:20.409152   50965 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0612 21:04:20.409161   50965 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0612 21:04:20.409166   50965 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0612 21:04:20.409172   50965 command_runner.go:130] > #
	I0612 21:04:20.409178   50965 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0612 21:04:20.409186   50965 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0612 21:04:20.409190   50965 command_runner.go:130] > #
	I0612 21:04:20.409195   50965 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0612 21:04:20.409203   50965 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0612 21:04:20.409206   50965 command_runner.go:130] > #
	I0612 21:04:20.409212   50965 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0612 21:04:20.409220   50965 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0612 21:04:20.409223   50965 command_runner.go:130] > # limitation.
	I0612 21:04:20.409235   50965 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0612 21:04:20.409242   50965 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0612 21:04:20.409245   50965 command_runner.go:130] > runtime_type = "oci"
	I0612 21:04:20.409249   50965 command_runner.go:130] > runtime_root = "/run/runc"
	I0612 21:04:20.409259   50965 command_runner.go:130] > runtime_config_path = ""
	I0612 21:04:20.409264   50965 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0612 21:04:20.409270   50965 command_runner.go:130] > monitor_cgroup = "pod"
	I0612 21:04:20.409274   50965 command_runner.go:130] > monitor_exec_cgroup = ""
	I0612 21:04:20.409279   50965 command_runner.go:130] > monitor_env = [
	I0612 21:04:20.409284   50965 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0612 21:04:20.409288   50965 command_runner.go:130] > ]
	I0612 21:04:20.409293   50965 command_runner.go:130] > privileged_without_host_devices = false
	I0612 21:04:20.409301   50965 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0612 21:04:20.409306   50965 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0612 21:04:20.409318   50965 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0612 21:04:20.409328   50965 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0612 21:04:20.409335   50965 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0612 21:04:20.409345   50965 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0612 21:04:20.409353   50965 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0612 21:04:20.409363   50965 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0612 21:04:20.409368   50965 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0612 21:04:20.409374   50965 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0612 21:04:20.409378   50965 command_runner.go:130] > # Example:
	I0612 21:04:20.409381   50965 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0612 21:04:20.409385   50965 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0612 21:04:20.409390   50965 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0612 21:04:20.409394   50965 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0612 21:04:20.409397   50965 command_runner.go:130] > # cpuset = 0
	I0612 21:04:20.409401   50965 command_runner.go:130] > # cpushares = "0-1"
	I0612 21:04:20.409404   50965 command_runner.go:130] > # Where:
	I0612 21:04:20.409408   50965 command_runner.go:130] > # The workload name is workload-type.
	I0612 21:04:20.409415   50965 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0612 21:04:20.409420   50965 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0612 21:04:20.409425   50965 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0612 21:04:20.409432   50965 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0612 21:04:20.409437   50965 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0612 21:04:20.409450   50965 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0612 21:04:20.409456   50965 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0612 21:04:20.409460   50965 command_runner.go:130] > # Default value is set to true
	I0612 21:04:20.409464   50965 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0612 21:04:20.409469   50965 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0612 21:04:20.409473   50965 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0612 21:04:20.409477   50965 command_runner.go:130] > # Default value is set to 'false'
	I0612 21:04:20.409481   50965 command_runner.go:130] > # disable_hostport_mapping = false
	I0612 21:04:20.409487   50965 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0612 21:04:20.409489   50965 command_runner.go:130] > #
	I0612 21:04:20.409495   50965 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0612 21:04:20.409502   50965 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0612 21:04:20.409508   50965 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0612 21:04:20.409514   50965 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0612 21:04:20.409519   50965 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0612 21:04:20.409522   50965 command_runner.go:130] > [crio.image]
	I0612 21:04:20.409528   50965 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0612 21:04:20.409532   50965 command_runner.go:130] > # default_transport = "docker://"
	I0612 21:04:20.409537   50965 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0612 21:04:20.409543   50965 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0612 21:04:20.409549   50965 command_runner.go:130] > # global_auth_file = ""
	I0612 21:04:20.409553   50965 command_runner.go:130] > # The image used to instantiate infra containers.
	I0612 21:04:20.409558   50965 command_runner.go:130] > # This option supports live configuration reload.
	I0612 21:04:20.409562   50965 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0612 21:04:20.409568   50965 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0612 21:04:20.409575   50965 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0612 21:04:20.409580   50965 command_runner.go:130] > # This option supports live configuration reload.
	I0612 21:04:20.409587   50965 command_runner.go:130] > # pause_image_auth_file = ""
	I0612 21:04:20.409592   50965 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0612 21:04:20.409599   50965 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0612 21:04:20.409604   50965 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0612 21:04:20.409612   50965 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0612 21:04:20.409616   50965 command_runner.go:130] > # pause_command = "/pause"
	I0612 21:04:20.409624   50965 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0612 21:04:20.409630   50965 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0612 21:04:20.409637   50965 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0612 21:04:20.409647   50965 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0612 21:04:20.409654   50965 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0612 21:04:20.409659   50965 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0612 21:04:20.409665   50965 command_runner.go:130] > # pinned_images = [
	I0612 21:04:20.409669   50965 command_runner.go:130] > # ]
	I0612 21:04:20.409675   50965 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0612 21:04:20.409682   50965 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0612 21:04:20.409688   50965 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0612 21:04:20.409696   50965 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0612 21:04:20.409701   50965 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0612 21:04:20.409707   50965 command_runner.go:130] > # signature_policy = ""
	I0612 21:04:20.409712   50965 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0612 21:04:20.409721   50965 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0612 21:04:20.409728   50965 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0612 21:04:20.409736   50965 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0612 21:04:20.409741   50965 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0612 21:04:20.409748   50965 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0612 21:04:20.409754   50965 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0612 21:04:20.409762   50965 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0612 21:04:20.409766   50965 command_runner.go:130] > # changing them here.
	I0612 21:04:20.409772   50965 command_runner.go:130] > # insecure_registries = [
	I0612 21:04:20.409776   50965 command_runner.go:130] > # ]
	I0612 21:04:20.409785   50965 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0612 21:04:20.409789   50965 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0612 21:04:20.409799   50965 command_runner.go:130] > # image_volumes = "mkdir"
	I0612 21:04:20.409806   50965 command_runner.go:130] > # Temporary directory to use for storing big files
	I0612 21:04:20.409810   50965 command_runner.go:130] > # big_files_temporary_dir = ""
	I0612 21:04:20.409819   50965 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0612 21:04:20.409823   50965 command_runner.go:130] > # CNI plugins.
	I0612 21:04:20.409828   50965 command_runner.go:130] > [crio.network]
	I0612 21:04:20.409835   50965 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0612 21:04:20.409842   50965 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0612 21:04:20.409846   50965 command_runner.go:130] > # cni_default_network = ""
	I0612 21:04:20.409853   50965 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0612 21:04:20.409858   50965 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0612 21:04:20.409865   50965 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0612 21:04:20.409874   50965 command_runner.go:130] > # plugin_dirs = [
	I0612 21:04:20.409880   50965 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0612 21:04:20.409883   50965 command_runner.go:130] > # ]
	I0612 21:04:20.409888   50965 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0612 21:04:20.409893   50965 command_runner.go:130] > [crio.metrics]
	I0612 21:04:20.409897   50965 command_runner.go:130] > # Globally enable or disable metrics support.
	I0612 21:04:20.409902   50965 command_runner.go:130] > enable_metrics = true
	I0612 21:04:20.409906   50965 command_runner.go:130] > # Specify enabled metrics collectors.
	I0612 21:04:20.409911   50965 command_runner.go:130] > # Per default all metrics are enabled.
	I0612 21:04:20.409916   50965 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0612 21:04:20.409925   50965 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0612 21:04:20.409930   50965 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0612 21:04:20.409936   50965 command_runner.go:130] > # metrics_collectors = [
	I0612 21:04:20.409940   50965 command_runner.go:130] > # 	"operations",
	I0612 21:04:20.409944   50965 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0612 21:04:20.409951   50965 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0612 21:04:20.409955   50965 command_runner.go:130] > # 	"operations_errors",
	I0612 21:04:20.409961   50965 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0612 21:04:20.409965   50965 command_runner.go:130] > # 	"image_pulls_by_name",
	I0612 21:04:20.409970   50965 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0612 21:04:20.409974   50965 command_runner.go:130] > # 	"image_pulls_failures",
	I0612 21:04:20.409980   50965 command_runner.go:130] > # 	"image_pulls_successes",
	I0612 21:04:20.409984   50965 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0612 21:04:20.409988   50965 command_runner.go:130] > # 	"image_layer_reuse",
	I0612 21:04:20.409994   50965 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0612 21:04:20.409999   50965 command_runner.go:130] > # 	"containers_oom_total",
	I0612 21:04:20.410004   50965 command_runner.go:130] > # 	"containers_oom",
	I0612 21:04:20.410008   50965 command_runner.go:130] > # 	"processes_defunct",
	I0612 21:04:20.410012   50965 command_runner.go:130] > # 	"operations_total",
	I0612 21:04:20.410016   50965 command_runner.go:130] > # 	"operations_latency_seconds",
	I0612 21:04:20.410022   50965 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0612 21:04:20.410026   50965 command_runner.go:130] > # 	"operations_errors_total",
	I0612 21:04:20.410032   50965 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0612 21:04:20.410036   50965 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0612 21:04:20.410043   50965 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0612 21:04:20.410048   50965 command_runner.go:130] > # 	"image_pulls_success_total",
	I0612 21:04:20.410056   50965 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0612 21:04:20.410063   50965 command_runner.go:130] > # 	"containers_oom_count_total",
	I0612 21:04:20.410068   50965 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0612 21:04:20.410074   50965 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0612 21:04:20.410077   50965 command_runner.go:130] > # ]
	I0612 21:04:20.410083   50965 command_runner.go:130] > # The port on which the metrics server will listen.
	I0612 21:04:20.410089   50965 command_runner.go:130] > # metrics_port = 9090
	I0612 21:04:20.410093   50965 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0612 21:04:20.410103   50965 command_runner.go:130] > # metrics_socket = ""
	I0612 21:04:20.410110   50965 command_runner.go:130] > # The certificate for the secure metrics server.
	I0612 21:04:20.410116   50965 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0612 21:04:20.410124   50965 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0612 21:04:20.410129   50965 command_runner.go:130] > # certificate on any modification event.
	I0612 21:04:20.410135   50965 command_runner.go:130] > # metrics_cert = ""
	I0612 21:04:20.410140   50965 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0612 21:04:20.410147   50965 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0612 21:04:20.410151   50965 command_runner.go:130] > # metrics_key = ""
	I0612 21:04:20.410158   50965 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0612 21:04:20.410162   50965 command_runner.go:130] > [crio.tracing]
	I0612 21:04:20.410169   50965 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0612 21:04:20.410173   50965 command_runner.go:130] > # enable_tracing = false
	I0612 21:04:20.410181   50965 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0612 21:04:20.410185   50965 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0612 21:04:20.410194   50965 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0612 21:04:20.410198   50965 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0612 21:04:20.410204   50965 command_runner.go:130] > # CRI-O NRI configuration.
	I0612 21:04:20.410208   50965 command_runner.go:130] > [crio.nri]
	I0612 21:04:20.410212   50965 command_runner.go:130] > # Globally enable or disable NRI.
	I0612 21:04:20.410217   50965 command_runner.go:130] > # enable_nri = false
	I0612 21:04:20.410221   50965 command_runner.go:130] > # NRI socket to listen on.
	I0612 21:04:20.410228   50965 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0612 21:04:20.410233   50965 command_runner.go:130] > # NRI plugin directory to use.
	I0612 21:04:20.410239   50965 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0612 21:04:20.410244   50965 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0612 21:04:20.410249   50965 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0612 21:04:20.410257   50965 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0612 21:04:20.410270   50965 command_runner.go:130] > # nri_disable_connections = false
	I0612 21:04:20.410278   50965 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0612 21:04:20.410282   50965 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0612 21:04:20.410287   50965 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0612 21:04:20.410294   50965 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0612 21:04:20.410299   50965 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0612 21:04:20.410303   50965 command_runner.go:130] > [crio.stats]
	I0612 21:04:20.410308   50965 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0612 21:04:20.410316   50965 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0612 21:04:20.410320   50965 command_runner.go:130] > # stats_collection_period = 0
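
For quick reference, the active (uncommented) values in the CRI-O dump above condense to the sketch below; every other option was left at CRI-O's built-in defaults. This is only a summary of what was logged in this run, not the complete crio.conf on the node, and the [crio.api] section header is assumed from CRI-O's standard config layout (it falls above this excerpt).

	[crio.api]  # header assumed; the two grpc values below were logged earlier in the dump
	grpc_max_send_msg_size = 16777216
	grpc_max_recv_msg_size = 16777216
	[crio.runtime]
	conmon = "/usr/libexec/crio/conmon"
	conmon_cgroup = "pod"
	conmon_env = ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
	seccomp_use_default_when_empty = false
	cgroup_manager = "cgroupfs"
	default_sysctls = ["net.ipv4.ip_unprivileged_port_start=0"]
	pids_limit = 1024
	drop_infra_ctr = false
	pinns_path = "/usr/bin/pinns"
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	runtime_config_path = ""
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_exec_cgroup = ""
	monitor_env = ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
	privileged_without_host_devices = false
	[crio.metrics]
	enable_metrics = true
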
	I0612 21:04:20.410475   50965 cni.go:84] Creating CNI manager for ""
	I0612 21:04:20.410488   50965 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0612 21:04:20.410501   50965 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:04:20.410527   50965 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.222 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-991051 NodeName:multinode-991051 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:04:20.410652   50965 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-991051"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
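The KubeletConfiguration above intentionally disables disk-pressure eviction for this test profile (imageGCHighThresholdPercent: 100 and all evictionHard thresholds set to "0%"). For contrast, here is a sketch of the kubelet's documented default hard-eviction thresholds, which a non-test cluster would typically keep; these values come from the upstream kubelet documentation, not from this log:

	# Illustrative only: upstream kubelet defaults, not what this test run uses.
	evictionHard:
	  memory.available: "100Mi"
	  nodefs.available: "10%"
	  imagefs.available: "15%"
	  nodefs.inodesFree: "5%"
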
	I0612 21:04:20.410718   50965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:04:20.424154   50965 command_runner.go:130] > kubeadm
	I0612 21:04:20.424176   50965 command_runner.go:130] > kubectl
	I0612 21:04:20.424183   50965 command_runner.go:130] > kubelet
	I0612 21:04:20.424204   50965 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:04:20.424295   50965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:04:20.436620   50965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0612 21:04:20.455596   50965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:04:20.473711   50965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0612 21:04:20.491616   50965 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I0612 21:04:20.495723   50965 command_runner.go:130] > 192.168.39.222	control-plane.minikube.internal
	I0612 21:04:20.495895   50965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:04:20.643275   50965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:04:20.659243   50965 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051 for IP: 192.168.39.222
	I0612 21:04:20.659263   50965 certs.go:194] generating shared ca certs ...
	I0612 21:04:20.659289   50965 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:04:20.659489   50965 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:04:20.659544   50965 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:04:20.659557   50965 certs.go:256] generating profile certs ...
	I0612 21:04:20.659677   50965 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/client.key
	I0612 21:04:20.659764   50965 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/apiserver.key.36fb12b1
	I0612 21:04:20.659824   50965 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/proxy-client.key
	I0612 21:04:20.659839   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 21:04:20.659858   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0612 21:04:20.659875   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 21:04:20.659891   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 21:04:20.659906   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 21:04:20.659925   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 21:04:20.659942   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 21:04:20.659959   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 21:04:20.660033   50965 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:04:20.660067   50965 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:04:20.660077   50965 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:04:20.660109   50965 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:04:20.660139   50965 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:04:20.660170   50965 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:04:20.660224   50965 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:04:20.660265   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> /usr/share/ca-certificates/214442.pem
	I0612 21:04:20.660294   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:04:20.660314   50965 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem -> /usr/share/ca-certificates/21444.pem
	I0612 21:04:20.661120   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:04:20.688195   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:04:20.713286   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:04:20.738379   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:04:20.762997   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 21:04:20.787921   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 21:04:20.812579   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:04:20.837459   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/multinode-991051/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:04:20.862646   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:04:20.889791   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:04:20.915765   50965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:04:20.941067   50965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:04:20.958214   50965 ssh_runner.go:195] Run: openssl version
	I0612 21:04:20.964639   50965 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0612 21:04:20.964735   50965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:04:20.976733   50965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:04:20.981426   50965 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:04:20.981473   50965 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:04:20.981509   50965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:04:20.987568   50965 command_runner.go:130] > 51391683
	I0612 21:04:20.987639   50965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:04:20.997654   50965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:04:21.008966   50965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:04:21.013316   50965 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:04:21.013357   50965 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:04:21.013395   50965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:04:21.019000   50965 command_runner.go:130] > 3ec20f2e
	I0612 21:04:21.019071   50965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:04:21.029206   50965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:04:21.040649   50965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:04:21.045322   50965 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:04:21.045354   50965 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:04:21.045394   50965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:04:21.050928   50965 command_runner.go:130] > b5213941
	I0612 21:04:21.051290   50965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
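Each ls / openssl x509 -hash / ln -fs triple above installs a CA bundle into the system trust store under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0). A minimal Go sketch of that step, assuming openssl is on PATH and the caller can write to the target directory; the paths are placeholders rather than the test's real values:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert installs pemPath under its OpenSSL subject hash, mirroring the
// "openssl x509 -hash" + "ln -fs" sequence logged above.
func linkCACert(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")

	// ln -fs equivalent: drop any stale link before recreating it.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("link failed:", err)
	}
}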
	I0612 21:04:21.061315   50965 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:04:21.065905   50965 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:04:21.065934   50965 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0612 21:04:21.065942   50965 command_runner.go:130] > Device: 253,1	Inode: 2104342     Links: 1
	I0612 21:04:21.065951   50965 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0612 21:04:21.065960   50965 command_runner.go:130] > Access: 2024-06-12 20:58:00.839277397 +0000
	I0612 21:04:21.065966   50965 command_runner.go:130] > Modify: 2024-06-12 20:58:00.839277397 +0000
	I0612 21:04:21.065973   50965 command_runner.go:130] > Change: 2024-06-12 20:58:00.839277397 +0000
	I0612 21:04:21.065981   50965 command_runner.go:130] >  Birth: 2024-06-12 20:58:00.839277397 +0000
	I0612 21:04:21.066084   50965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:04:21.071946   50965 command_runner.go:130] > Certificate will not expire
	I0612 21:04:21.072024   50965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:04:21.077561   50965 command_runner.go:130] > Certificate will not expire
	I0612 21:04:21.077803   50965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:04:21.083525   50965 command_runner.go:130] > Certificate will not expire
	I0612 21:04:21.083583   50965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:04:21.089340   50965 command_runner.go:130] > Certificate will not expire
	I0612 21:04:21.089390   50965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:04:21.094951   50965 command_runner.go:130] > Certificate will not expire
	I0612 21:04:21.094996   50965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 21:04:21.100458   50965 command_runner.go:130] > Certificate will not expire
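The openssl x509 -checkend 86400 calls above ask whether each certificate will still be valid 24 hours from now. The same check can be expressed with Go's standard library; a minimal sketch, with the certificate path as an illustrative placeholder:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Equivalent of `openssl x509 -checkend 86400`: does the certificate
	// expire within the next 24 hours?
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}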
	I0612 21:04:21.100625   50965 kubeadm.go:391] StartCluster: {Name:multinode-991051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-991051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:04:21.100811   50965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:04:21.100867   50965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:04:21.141500   50965 command_runner.go:130] > 55c89de09a94cc863ff747da4ec19a23f20c354694f2ecfdff2e685ac2e65f3a
	I0612 21:04:21.141534   50965 command_runner.go:130] > 5444a9801baa417feaec95ab2d88e718edc11b32229d9c81ed1fc47ca3eb5c13
	I0612 21:04:21.141544   50965 command_runner.go:130] > 98f8978fdf74512b23844eeef590cf9687d0dc616691561f425007b8c60de24c
	I0612 21:04:21.141553   50965 command_runner.go:130] > 2388fa10173fb8f675b905600b8b657a7329203a4b98c3e612c5c01c94269906
	I0612 21:04:21.141562   50965 command_runner.go:130] > e8bdc02b5de3e8061a405cbb7daa6d053de15008582ea77c42820564bacb2aaf
	I0612 21:04:21.141571   50965 command_runner.go:130] > 3ae9672be263494df9fd7a011d1621f35c8cafd2080af8bdc740e73f7fa580ce
	I0612 21:04:21.141580   50965 command_runner.go:130] > 3280d415399d241dd67375b235ecd4588814568e5e825a7ffdba48158bea7c85
	I0612 21:04:21.141591   50965 command_runner.go:130] > 40967dcc017916934d08c71706f88dd7901b682671677d7cbf4b369fc15930c0
	I0612 21:04:21.141626   50965 cri.go:89] found id: "55c89de09a94cc863ff747da4ec19a23f20c354694f2ecfdff2e685ac2e65f3a"
	I0612 21:04:21.141646   50965 cri.go:89] found id: "5444a9801baa417feaec95ab2d88e718edc11b32229d9c81ed1fc47ca3eb5c13"
	I0612 21:04:21.141651   50965 cri.go:89] found id: "98f8978fdf74512b23844eeef590cf9687d0dc616691561f425007b8c60de24c"
	I0612 21:04:21.141655   50965 cri.go:89] found id: "2388fa10173fb8f675b905600b8b657a7329203a4b98c3e612c5c01c94269906"
	I0612 21:04:21.141658   50965 cri.go:89] found id: "e8bdc02b5de3e8061a405cbb7daa6d053de15008582ea77c42820564bacb2aaf"
	I0612 21:04:21.141661   50965 cri.go:89] found id: "3ae9672be263494df9fd7a011d1621f35c8cafd2080af8bdc740e73f7fa580ce"
	I0612 21:04:21.141681   50965 cri.go:89] found id: "3280d415399d241dd67375b235ecd4588814568e5e825a7ffdba48158bea7c85"
	I0612 21:04:21.141688   50965 cri.go:89] found id: "40967dcc017916934d08c71706f88dd7901b682671677d7cbf4b369fc15930c0"
	I0612 21:04:21.141690   50965 cri.go:89] found id: ""
	I0612 21:04:21.141740   50965 ssh_runner.go:195] Run: sudo runc list -f json
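Before deciding what to restart, cri.go shells out to crictl to collect the IDs of every kube-system container, running or exited, as logged above. A simplified sketch of that call, assuming crictl is installed and the CRI-O socket is reachable; the original wraps the command in sudo -s eval, which is omitted here:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List all kube-system container IDs, including exited ones, the same
	// way the listing step above does.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}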
	
	
	==> CRI-O <==
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.354181951Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718226489354159055,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0d99293-4256-4f1e-a388-4771080b7e48 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.354854915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa350923-9f9a-4c6d-a9d6-7906fd1ca5c5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.354966882Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa350923-9f9a-4c6d-a9d6-7906fd1ca5c5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.355361667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e356af2991acd35e8c5e1010c2edcfafcdaa44202f7a7de1f64fdcb129b1cb97,PodSandboxId:2765d8d89dc60b11465338bb625cf83233ae0c47977122526dda4e2c3eb8de0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718226302132060909,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e15df1e1c381db7fd134e2b814595d42af6ae8a54981cc908a49c53c4a1bb9,PodSandboxId:9f85e7d9a139355a4d11c93ef8d33423360aa0622ee97fb6e5a8846239efb0c1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718226268653878749,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00d5ce7c2be9f85077bc8e0388d9fa32ba1bda0561e11f78b247f01d99da3d6,PodSandboxId:d5431b2fdc6ccb5032e7e75dee7a0bcdc31ee038c99fa77cb164f49b50497852,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718226268505600075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48723a00f68034b2b9157bc84da729cb2ba5698b870150e02f80d3c7e1621aae,PodSandboxId:835ea78f2a30143650553e31633dddd64d8b30b2506bed7f27aea0cb8bf3a695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718226268445869395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1282d310fbf74c5662700466fab7cb94876f3856f4651e5f83284f2361bd8724,PodSandboxId:2377cfd1f1177a0030b2481ab2e4ad7abf93a225046e0458e3e1fbb8b2a3da91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718226268350438967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f189b9415f0984871a7f457c39dda70e32109051b8c0727a20cbd483bb4e9c8c,PodSandboxId:34f76e53eae69485c9673bb9813104abd4aeabf142b53b0ee79e0f471b99cc02,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718226263608310806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5a0dd458,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeba3ac7698f6380d6082ee5c673f572a710e176fb3a3d5dc6b43dfb7bb4130c,PodSandboxId:b9069b6210b629e3d6551e12622c1deadccfb1e5282b8305a936196343dc7e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718226263537240543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]string{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01295b32b6815016713b036abc654cee51e14f9aba50c15ab21f991e5ea1bac3,PodSandboxId:b1f7f88ba9fcf4d8c5436e7c2b210e62b6b270bd56b89dd703af6152ddf286a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718226263492477646,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467c4660de162c74d8bc29ebfdaebba7594ac023fa9d24a9cf66e9bbf967f960,PodSandboxId:b52346ae1ba5d421641ac822dcfa3dad8012e8185faab1a12b5318e8e6d999d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718226263472607810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3e772291419e269d7d384e11d755c67e1382c12181adfa9479ea1f2d722dee,PodSandboxId:27760f2e721b4cabd837e0e00013bfb9abfc74b3640aebacc2c0dc2c6f63291d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718225956372753821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c89de09a94cc863ff747da4ec19a23f20c354694f2ecfdff2e685ac2e65f3a,PodSandboxId:747bf00d4dc3c16fb5474ececcbda50427fd76c65921e57da775b0343ac22a12,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718225909231185107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5444a9801baa417feaec95ab2d88e718edc11b32229d9c81ed1fc47ca3eb5c13,PodSandboxId:28629b256cb1868a1ac54f06575a79eb7f183a2451add37a0f1a6b4c33e855cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718225909175177713,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.kubernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f8978fdf74512b23844eeef590cf9687d0dc616691561f425007b8c60de24c,PodSandboxId:b8a668a1284ee4597b6e3789502bc8ef03720dd341f92e3ea388cf996f7b0a4a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718225907816411305,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2388fa10173fb8f675b905600b8b657a7329203a4b98c3e612c5c01c94269906,PodSandboxId:b5f91e0ef8f81e93939cf8164692777100c10dfb5084c1d282f6a852c7a5d430,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718225904083066654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae9672be263494df9fd7a011d1621f35c8cafd2080af8bdc740e73f7fa580ce,PodSandboxId:e5bd299b8eaf1d06cd44d6ddacc1fef873a8b45636dd63f5e5b6848973158413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718225884347734931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8bdc02b5de3e8061a405cbb7daa6d053de15008582ea77c42820564bacb2aaf,PodSandboxId:6b5d45256b5732bcdd42f67c430b771dea6e25a6c3d5530705a5543d4904e0f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718225884354541310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc
5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.container.hash: 5a0dd458,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40967dcc017916934d08c71706f88dd7901b682671677d7cbf4b369fc15930c0,PodSandboxId:b09fab012871969680122263723e9e7810048137c6bd1be1640fe928263093cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718225884317379092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3280d415399d241dd67375b235ecd4588814568e5e825a7ffdba48158bea7c85,PodSandboxId:cb24e84e5e4faef0a1d547f72a678c388fb91c90f9b6b7e8fd8e07a31043ca75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718225884330858527,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa350923-9f9a-4c6d-a9d6-7906fd1ca5c5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.399002644Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b7d2b57-1b48-4b20-881f-d927d255318e name=/runtime.v1.RuntimeService/Version
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.399083426Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b7d2b57-1b48-4b20-881f-d927d255318e name=/runtime.v1.RuntimeService/Version
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.400166191Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f8668db-5964-412a-b3e8-fe5e369fe658 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.400574579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718226489400555792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f8668db-5964-412a-b3e8-fe5e369fe658 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.401202257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a105ae71-4ecd-4928-bdf5-064ab8110b92 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.401255736Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a105ae71-4ecd-4928-bdf5-064ab8110b92 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.401745958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e356af2991acd35e8c5e1010c2edcfafcdaa44202f7a7de1f64fdcb129b1cb97,PodSandboxId:2765d8d89dc60b11465338bb625cf83233ae0c47977122526dda4e2c3eb8de0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718226302132060909,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e15df1e1c381db7fd134e2b814595d42af6ae8a54981cc908a49c53c4a1bb9,PodSandboxId:9f85e7d9a139355a4d11c93ef8d33423360aa0622ee97fb6e5a8846239efb0c1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718226268653878749,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00d5ce7c2be9f85077bc8e0388d9fa32ba1bda0561e11f78b247f01d99da3d6,PodSandboxId:d5431b2fdc6ccb5032e7e75dee7a0bcdc31ee038c99fa77cb164f49b50497852,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718226268505600075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48723a00f68034b2b9157bc84da729cb2ba5698b870150e02f80d3c7e1621aae,PodSandboxId:835ea78f2a30143650553e31633dddd64d8b30b2506bed7f27aea0cb8bf3a695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718226268445869395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1282d310fbf74c5662700466fab7cb94876f3856f4651e5f83284f2361bd8724,PodSandboxId:2377cfd1f1177a0030b2481ab2e4ad7abf93a225046e0458e3e1fbb8b2a3da91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718226268350438967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f189b9415f0984871a7f457c39dda70e32109051b8c0727a20cbd483bb4e9c8c,PodSandboxId:34f76e53eae69485c9673bb9813104abd4aeabf142b53b0ee79e0f471b99cc02,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718226263608310806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5a0dd458,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeba3ac7698f6380d6082ee5c673f572a710e176fb3a3d5dc6b43dfb7bb4130c,PodSandboxId:b9069b6210b629e3d6551e12622c1deadccfb1e5282b8305a936196343dc7e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718226263537240543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]string{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01295b32b6815016713b036abc654cee51e14f9aba50c15ab21f991e5ea1bac3,PodSandboxId:b1f7f88ba9fcf4d8c5436e7c2b210e62b6b270bd56b89dd703af6152ddf286a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718226263492477646,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467c4660de162c74d8bc29ebfdaebba7594ac023fa9d24a9cf66e9bbf967f960,PodSandboxId:b52346ae1ba5d421641ac822dcfa3dad8012e8185faab1a12b5318e8e6d999d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718226263472607810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3e772291419e269d7d384e11d755c67e1382c12181adfa9479ea1f2d722dee,PodSandboxId:27760f2e721b4cabd837e0e00013bfb9abfc74b3640aebacc2c0dc2c6f63291d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718225956372753821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c89de09a94cc863ff747da4ec19a23f20c354694f2ecfdff2e685ac2e65f3a,PodSandboxId:747bf00d4dc3c16fb5474ececcbda50427fd76c65921e57da775b0343ac22a12,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718225909231185107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5444a9801baa417feaec95ab2d88e718edc11b32229d9c81ed1fc47ca3eb5c13,PodSandboxId:28629b256cb1868a1ac54f06575a79eb7f183a2451add37a0f1a6b4c33e855cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718225909175177713,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.kubernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f8978fdf74512b23844eeef590cf9687d0dc616691561f425007b8c60de24c,PodSandboxId:b8a668a1284ee4597b6e3789502bc8ef03720dd341f92e3ea388cf996f7b0a4a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718225907816411305,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2388fa10173fb8f675b905600b8b657a7329203a4b98c3e612c5c01c94269906,PodSandboxId:b5f91e0ef8f81e93939cf8164692777100c10dfb5084c1d282f6a852c7a5d430,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718225904083066654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae9672be263494df9fd7a011d1621f35c8cafd2080af8bdc740e73f7fa580ce,PodSandboxId:e5bd299b8eaf1d06cd44d6ddacc1fef873a8b45636dd63f5e5b6848973158413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718225884347734931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8bdc02b5de3e8061a405cbb7daa6d053de15008582ea77c42820564bacb2aaf,PodSandboxId:6b5d45256b5732bcdd42f67c430b771dea6e25a6c3d5530705a5543d4904e0f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718225884354541310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc
5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.container.hash: 5a0dd458,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40967dcc017916934d08c71706f88dd7901b682671677d7cbf4b369fc15930c0,PodSandboxId:b09fab012871969680122263723e9e7810048137c6bd1be1640fe928263093cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718225884317379092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3280d415399d241dd67375b235ecd4588814568e5e825a7ffdba48158bea7c85,PodSandboxId:cb24e84e5e4faef0a1d547f72a678c388fb91c90f9b6b7e8fd8e07a31043ca75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718225884330858527,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a105ae71-4ecd-4928-bdf5-064ab8110b92 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.443590763Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10d528a4-6212-415b-92f0-c897a1810805 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.443693594Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10d528a4-6212-415b-92f0-c897a1810805 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.444831735Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b2c0b211-304b-4a46-a7aa-77b91089e291 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.445565832Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718226489445533044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2c0b211-304b-4a46-a7aa-77b91089e291 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.446334568Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bdadd64f-3324-4c6d-9135-9cde574d98b9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.446390112Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bdadd64f-3324-4c6d-9135-9cde574d98b9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.446843130Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e356af2991acd35e8c5e1010c2edcfafcdaa44202f7a7de1f64fdcb129b1cb97,PodSandboxId:2765d8d89dc60b11465338bb625cf83233ae0c47977122526dda4e2c3eb8de0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718226302132060909,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e15df1e1c381db7fd134e2b814595d42af6ae8a54981cc908a49c53c4a1bb9,PodSandboxId:9f85e7d9a139355a4d11c93ef8d33423360aa0622ee97fb6e5a8846239efb0c1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718226268653878749,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00d5ce7c2be9f85077bc8e0388d9fa32ba1bda0561e11f78b247f01d99da3d6,PodSandboxId:d5431b2fdc6ccb5032e7e75dee7a0bcdc31ee038c99fa77cb164f49b50497852,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718226268505600075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48723a00f68034b2b9157bc84da729cb2ba5698b870150e02f80d3c7e1621aae,PodSandboxId:835ea78f2a30143650553e31633dddd64d8b30b2506bed7f27aea0cb8bf3a695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718226268445869395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1282d310fbf74c5662700466fab7cb94876f3856f4651e5f83284f2361bd8724,PodSandboxId:2377cfd1f1177a0030b2481ab2e4ad7abf93a225046e0458e3e1fbb8b2a3da91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718226268350438967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f189b9415f0984871a7f457c39dda70e32109051b8c0727a20cbd483bb4e9c8c,PodSandboxId:34f76e53eae69485c9673bb9813104abd4aeabf142b53b0ee79e0f471b99cc02,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718226263608310806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5a0dd458,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeba3ac7698f6380d6082ee5c673f572a710e176fb3a3d5dc6b43dfb7bb4130c,PodSandboxId:b9069b6210b629e3d6551e12622c1deadccfb1e5282b8305a936196343dc7e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718226263537240543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]string{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01295b32b6815016713b036abc654cee51e14f9aba50c15ab21f991e5ea1bac3,PodSandboxId:b1f7f88ba9fcf4d8c5436e7c2b210e62b6b270bd56b89dd703af6152ddf286a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718226263492477646,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467c4660de162c74d8bc29ebfdaebba7594ac023fa9d24a9cf66e9bbf967f960,PodSandboxId:b52346ae1ba5d421641ac822dcfa3dad8012e8185faab1a12b5318e8e6d999d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718226263472607810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3e772291419e269d7d384e11d755c67e1382c12181adfa9479ea1f2d722dee,PodSandboxId:27760f2e721b4cabd837e0e00013bfb9abfc74b3640aebacc2c0dc2c6f63291d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718225956372753821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c89de09a94cc863ff747da4ec19a23f20c354694f2ecfdff2e685ac2e65f3a,PodSandboxId:747bf00d4dc3c16fb5474ececcbda50427fd76c65921e57da775b0343ac22a12,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718225909231185107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5444a9801baa417feaec95ab2d88e718edc11b32229d9c81ed1fc47ca3eb5c13,PodSandboxId:28629b256cb1868a1ac54f06575a79eb7f183a2451add37a0f1a6b4c33e855cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718225909175177713,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.kubernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f8978fdf74512b23844eeef590cf9687d0dc616691561f425007b8c60de24c,PodSandboxId:b8a668a1284ee4597b6e3789502bc8ef03720dd341f92e3ea388cf996f7b0a4a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718225907816411305,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2388fa10173fb8f675b905600b8b657a7329203a4b98c3e612c5c01c94269906,PodSandboxId:b5f91e0ef8f81e93939cf8164692777100c10dfb5084c1d282f6a852c7a5d430,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718225904083066654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae9672be263494df9fd7a011d1621f35c8cafd2080af8bdc740e73f7fa580ce,PodSandboxId:e5bd299b8eaf1d06cd44d6ddacc1fef873a8b45636dd63f5e5b6848973158413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718225884347734931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8bdc02b5de3e8061a405cbb7daa6d053de15008582ea77c42820564bacb2aaf,PodSandboxId:6b5d45256b5732bcdd42f67c430b771dea6e25a6c3d5530705a5543d4904e0f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718225884354541310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc
5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.container.hash: 5a0dd458,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40967dcc017916934d08c71706f88dd7901b682671677d7cbf4b369fc15930c0,PodSandboxId:b09fab012871969680122263723e9e7810048137c6bd1be1640fe928263093cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718225884317379092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3280d415399d241dd67375b235ecd4588814568e5e825a7ffdba48158bea7c85,PodSandboxId:cb24e84e5e4faef0a1d547f72a678c388fb91c90f9b6b7e8fd8e07a31043ca75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718225884330858527,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bdadd64f-3324-4c6d-9135-9cde574d98b9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.486602181Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5fd9dad5-0fc4-43e8-b10f-321e5ca5c69e name=/runtime.v1.RuntimeService/Version
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.486680684Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5fd9dad5-0fc4-43e8-b10f-321e5ca5c69e name=/runtime.v1.RuntimeService/Version
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.488064903Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d21a21df-48f4-4aeb-b195-874b4e2f086c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.488820298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718226489488790703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d21a21df-48f4-4aeb-b195-874b4e2f086c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.491287056Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a549d5bc-d33b-47ed-a03d-4e4ab8810c82 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.491370637Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a549d5bc-d33b-47ed-a03d-4e4ab8810c82 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:08:09 multinode-991051 crio[2858]: time="2024-06-12 21:08:09.492081534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e356af2991acd35e8c5e1010c2edcfafcdaa44202f7a7de1f64fdcb129b1cb97,PodSandboxId:2765d8d89dc60b11465338bb625cf83233ae0c47977122526dda4e2c3eb8de0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718226302132060909,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46e15df1e1c381db7fd134e2b814595d42af6ae8a54981cc908a49c53c4a1bb9,PodSandboxId:9f85e7d9a139355a4d11c93ef8d33423360aa0622ee97fb6e5a8846239efb0c1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718226268653878749,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00d5ce7c2be9f85077bc8e0388d9fa32ba1bda0561e11f78b247f01d99da3d6,PodSandboxId:d5431b2fdc6ccb5032e7e75dee7a0bcdc31ee038c99fa77cb164f49b50497852,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718226268505600075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48723a00f68034b2b9157bc84da729cb2ba5698b870150e02f80d3c7e1621aae,PodSandboxId:835ea78f2a30143650553e31633dddd64d8b30b2506bed7f27aea0cb8bf3a695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718226268445869395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1282d310fbf74c5662700466fab7cb94876f3856f4651e5f83284f2361bd8724,PodSandboxId:2377cfd1f1177a0030b2481ab2e4ad7abf93a225046e0458e3e1fbb8b2a3da91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718226268350438967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.ku
bernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f189b9415f0984871a7f457c39dda70e32109051b8c0727a20cbd483bb4e9c8c,PodSandboxId:34f76e53eae69485c9673bb9813104abd4aeabf142b53b0ee79e0f471b99cc02,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718226263608310806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5a0dd458,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeba3ac7698f6380d6082ee5c673f572a710e176fb3a3d5dc6b43dfb7bb4130c,PodSandboxId:b9069b6210b629e3d6551e12622c1deadccfb1e5282b8305a936196343dc7e79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718226263537240543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]string{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01295b32b6815016713b036abc654cee51e14f9aba50c15ab21f991e5ea1bac3,PodSandboxId:b1f7f88ba9fcf4d8c5436e7c2b210e62b6b270bd56b89dd703af6152ddf286a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718226263492477646,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467c4660de162c74d8bc29ebfdaebba7594ac023fa9d24a9cf66e9bbf967f960,PodSandboxId:b52346ae1ba5d421641ac822dcfa3dad8012e8185faab1a12b5318e8e6d999d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718226263472607810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a3e772291419e269d7d384e11d755c67e1382c12181adfa9479ea1f2d722dee,PodSandboxId:27760f2e721b4cabd837e0e00013bfb9abfc74b3640aebacc2c0dc2c6f63291d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718225956372753821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-846cm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f3f0e5b-62aa-4a06-8b50-45de75f7c9df,},Annotations:map[string]string{io.kubernetes.container.hash: 3ee98f61,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c89de09a94cc863ff747da4ec19a23f20c354694f2ecfdff2e685ac2e65f3a,PodSandboxId:747bf00d4dc3c16fb5474ececcbda50427fd76c65921e57da775b0343ac22a12,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718225909231185107,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bfxk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2029f4-e926-41da-8fbc-b6cf94d25ad9,},Annotations:map[string]string{io.kubernetes.container.hash: dae9775b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5444a9801baa417feaec95ab2d88e718edc11b32229d9c81ed1fc47ca3eb5c13,PodSandboxId:28629b256cb1868a1ac54f06575a79eb7f183a2451add37a0f1a6b4c33e855cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718225909175177713,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1da33189-d542-48a2-a11a-67720a303a16,},Annotations:map[string]string{io.kubernetes.container.hash: 47942fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f8978fdf74512b23844eeef590cf9687d0dc616691561f425007b8c60de24c,PodSandboxId:b8a668a1284ee4597b6e3789502bc8ef03720dd341f92e3ea388cf996f7b0a4a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718225907816411305,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f72hp,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e,},Annotations:map[string]string{io.kubernetes.container.hash: b0e9f629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2388fa10173fb8f675b905600b8b657a7329203a4b98c3e612c5c01c94269906,PodSandboxId:b5f91e0ef8f81e93939cf8164692777100c10dfb5084c1d282f6a852c7a5d430,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718225904083066654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqg55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 6f905a86,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae9672be263494df9fd7a011d1621f35c8cafd2080af8bdc740e73f7fa580ce,PodSandboxId:e5bd299b8eaf1d06cd44d6ddacc1fef873a8b45636dd63f5e5b6848973158413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718225884347734931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
969bba38d22785253409acfd4d32bf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8bdc02b5de3e8061a405cbb7daa6d053de15008582ea77c42820564bacb2aaf,PodSandboxId:6b5d45256b5732bcdd42f67c430b771dea6e25a6c3d5530705a5543d4904e0f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718225884354541310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac465b2fbc69d8dc
5f521a4275b2a26,},Annotations:map[string]string{io.kubernetes.container.hash: 5a0dd458,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40967dcc017916934d08c71706f88dd7901b682671677d7cbf4b369fc15930c0,PodSandboxId:b09fab012871969680122263723e9e7810048137c6bd1be1640fe928263093cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718225884317379092,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0a810eaa25137a02b499d4ae5d28e9,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c1baa530,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3280d415399d241dd67375b235ecd4588814568e5e825a7ffdba48158bea7c85,PodSandboxId:cb24e84e5e4faef0a1d547f72a678c388fb91c90f9b6b7e8fd8e07a31043ca75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718225884330858527,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-991051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f66a3a9f00e1fa2e05a8b5d9d430ad,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a549d5bc-d33b-47ed-a03d-4e4ab8810c82 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e356af2991acd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   2765d8d89dc60       busybox-fc5497c4f-846cm
	46e15df1e1c38       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago       Running             kindnet-cni               1                   9f85e7d9a1393       kindnet-f72hp
	b00d5ce7c2be9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   d5431b2fdc6cc       coredns-7db6d8ff4d-bfxk2
	48723a00f6803       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      3 minutes ago       Running             kube-proxy                1                   835ea78f2a301       kube-proxy-nqg55
	1282d310fbf74       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   2377cfd1f1177       storage-provisioner
	f189b9415f098       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      3 minutes ago       Running             kube-apiserver            1                   34f76e53eae69       kube-apiserver-multinode-991051
	eeba3ac7698f6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   b9069b6210b62       etcd-multinode-991051
	01295b32b6815       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      3 minutes ago       Running             kube-controller-manager   1                   b1f7f88ba9fcf       kube-controller-manager-multinode-991051
	467c4660de162       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      3 minutes ago       Running             kube-scheduler            1                   b52346ae1ba5d       kube-scheduler-multinode-991051
	7a3e772291419       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   27760f2e721b4       busybox-fc5497c4f-846cm
	55c89de09a94c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   747bf00d4dc3c       coredns-7db6d8ff4d-bfxk2
	5444a9801baa4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   28629b256cb18       storage-provisioner
	98f8978fdf745       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    9 minutes ago       Exited              kindnet-cni               0                   b8a668a1284ee       kindnet-f72hp
	2388fa10173fb       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      9 minutes ago       Exited              kube-proxy                0                   b5f91e0ef8f81       kube-proxy-nqg55
	e8bdc02b5de3e       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      10 minutes ago      Exited              kube-apiserver            0                   6b5d45256b573       kube-apiserver-multinode-991051
	3ae9672be2634       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      10 minutes ago      Exited              kube-scheduler            0                   e5bd299b8eaf1       kube-scheduler-multinode-991051
	3280d415399d2       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      10 minutes ago      Exited              kube-controller-manager   0                   cb24e84e5e4fa       kube-controller-manager-multinode-991051
	40967dcc01791       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   b09fab0128719       etcd-multinode-991051
	
	
	==> coredns [55c89de09a94cc863ff747da4ec19a23f20c354694f2ecfdff2e685ac2e65f3a] <==
	[INFO] 10.244.1.2:43745 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001847823s
	[INFO] 10.244.1.2:54879 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017707s
	[INFO] 10.244.1.2:33959 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105818s
	[INFO] 10.244.1.2:48862 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001244099s
	[INFO] 10.244.1.2:40661 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118494s
	[INFO] 10.244.1.2:58412 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077076s
	[INFO] 10.244.1.2:56989 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126649s
	[INFO] 10.244.0.3:43521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205979s
	[INFO] 10.244.0.3:54272 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070708s
	[INFO] 10.244.0.3:36006 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007132s
	[INFO] 10.244.0.3:57978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008134s
	[INFO] 10.244.1.2:50155 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131674s
	[INFO] 10.244.1.2:48107 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010002s
	[INFO] 10.244.1.2:33900 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007647s
	[INFO] 10.244.1.2:50036 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083568s
	[INFO] 10.244.0.3:56545 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154865s
	[INFO] 10.244.0.3:45508 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082938s
	[INFO] 10.244.0.3:50626 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112798s
	[INFO] 10.244.0.3:60306 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097559s
	[INFO] 10.244.1.2:38281 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017063s
	[INFO] 10.244.1.2:41878 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000133372s
	[INFO] 10.244.1.2:48515 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138159s
	[INFO] 10.244.1.2:54207 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121398s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b00d5ce7c2be9f85077bc8e0388d9fa32ba1bda0561e11f78b247f01d99da3d6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35213 - 51609 "HINFO IN 6562696624659742763.4870241254649022123. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016738574s
	
	
	==> describe nodes <==
	Name:               multinode-991051
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-991051
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=multinode-991051
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T20_58_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:58:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-991051
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:08:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:04:27 +0000   Wed, 12 Jun 2024 20:58:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:04:27 +0000   Wed, 12 Jun 2024 20:58:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:04:27 +0000   Wed, 12 Jun 2024 20:58:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:04:27 +0000   Wed, 12 Jun 2024 20:58:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    multinode-991051
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0768626dc5484c468fac8e9844f6eea4
	  System UUID:                0768626d-c548-4c46-8fac-8e9844f6eea4
	  Boot ID:                    1c4632eb-6f97-4dc1-98a0-c709cb774373
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-846cm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 coredns-7db6d8ff4d-bfxk2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m46s
	  kube-system                 etcd-multinode-991051                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-f72hp                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m46s
	  kube-system                 kube-apiserver-multinode-991051             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-991051    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-nqg55                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 kube-scheduler-multinode-991051             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m45s                  kube-proxy       
	  Normal  Starting                 3m40s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-991051 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-991051 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-991051 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m47s                  node-controller  Node multinode-991051 event: Registered Node multinode-991051 in Controller
	  Normal  NodeReady                9m41s                  kubelet          Node multinode-991051 status is now: NodeReady
	  Normal  Starting                 3m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m47s (x8 over 3m47s)  kubelet          Node multinode-991051 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m47s (x8 over 3m47s)  kubelet          Node multinode-991051 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m47s (x7 over 3m47s)  kubelet          Node multinode-991051 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m30s                  node-controller  Node multinode-991051 event: Registered Node multinode-991051 in Controller
	
	
	Name:               multinode-991051-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-991051-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=multinode-991051
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T21_05_06_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:05:06 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-991051-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:05:46 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 12 Jun 2024 21:05:36 +0000   Wed, 12 Jun 2024 21:06:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 12 Jun 2024 21:05:36 +0000   Wed, 12 Jun 2024 21:06:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 12 Jun 2024 21:05:36 +0000   Wed, 12 Jun 2024 21:06:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 12 Jun 2024 21:05:36 +0000   Wed, 12 Jun 2024 21:06:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.56
	  Hostname:    multinode-991051-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ffbd7e2434342f89de57b022368ba2d
	  System UUID:                1ffbd7e2-4343-42f8-9de5-7b022368ba2d
	  Boot ID:                    f15c2fb9-8fa1-42c5-8627-1b04bd417ff0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-96qct    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kindnet-nhj4r              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m10s
	  kube-system                 kube-proxy-snl29           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m                     kube-proxy       
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m10s (x2 over 9m10s)  kubelet          Node multinode-991051-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m10s (x2 over 9m10s)  kubelet          Node multinode-991051-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m10s (x2 over 9m10s)  kubelet          Node multinode-991051-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m                     kubelet          Node multinode-991051-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m3s (x2 over 3m3s)    kubelet          Node multinode-991051-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x2 over 3m3s)    kubelet          Node multinode-991051-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x2 over 3m3s)    kubelet          Node multinode-991051-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m55s                  kubelet          Node multinode-991051-m02 status is now: NodeReady
	  Normal  NodeNotReady             100s                   node-controller  Node multinode-991051-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.063790] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.170111] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.146708] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.256620] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.202304] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[Jun12 20:58] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.059176] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.002826] systemd-fstab-generator[1272]: Ignoring "noauto" option for root device
	[  +0.085143] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.362582] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.752240] systemd-fstab-generator[1469]: Ignoring "noauto" option for root device
	[  +5.162304] kauditd_printk_skb: 57 callbacks suppressed
	[Jun12 20:59] kauditd_printk_skb: 15 callbacks suppressed
	[Jun12 21:04] systemd-fstab-generator[2776]: Ignoring "noauto" option for root device
	[  +0.157188] systemd-fstab-generator[2789]: Ignoring "noauto" option for root device
	[  +0.175058] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.144685] systemd-fstab-generator[2816]: Ignoring "noauto" option for root device
	[  +0.299987] systemd-fstab-generator[2844]: Ignoring "noauto" option for root device
	[  +5.977888] systemd-fstab-generator[2942]: Ignoring "noauto" option for root device
	[  +0.087497] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.007294] systemd-fstab-generator[3067]: Ignoring "noauto" option for root device
	[  +5.659838] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.356875] kauditd_printk_skb: 32 callbacks suppressed
	[  +1.786908] systemd-fstab-generator[3885]: Ignoring "noauto" option for root device
	[Jun12 21:05] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [40967dcc017916934d08c71706f88dd7901b682671677d7cbf4b369fc15930c0] <==
	{"level":"info","ts":"2024-06-12T20:58:05.48004Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-12T20:58:05.541828Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.222:2379"}
	{"level":"info","ts":"2024-06-12T20:58:59.629967Z","caller":"traceutil/trace.go:171","msg":"trace[1762105491] linearizableReadLoop","detail":"{readStateIndex:464; appliedIndex:463; }","duration":"191.675434ms","start":"2024-06-12T20:58:59.438265Z","end":"2024-06-12T20:58:59.62994Z","steps":["trace[1762105491] 'read index received'  (duration: 128.54146ms)","trace[1762105491] 'applied index is now lower than readState.Index'  (duration: 63.133091ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-12T20:58:59.630163Z","caller":"traceutil/trace.go:171","msg":"trace[1100956355] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"227.451386ms","start":"2024-06-12T20:58:59.402704Z","end":"2024-06-12T20:58:59.630155Z","steps":["trace[1100956355] 'process raft request'  (duration: 164.094602ms)","trace[1100956355] 'compare'  (duration: 63.033578ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T20:58:59.630479Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.099593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-12T20:58:59.63061Z","caller":"traceutil/trace.go:171","msg":"trace[1449844035] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:443; }","duration":"192.415127ms","start":"2024-06-12T20:58:59.438179Z","end":"2024-06-12T20:58:59.630594Z","steps":["trace[1449844035] 'agreement among raft nodes before linearized reading'  (duration: 192.106437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:58:59.630659Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.842613ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-06-12T20:58:59.630725Z","caller":"traceutil/trace.go:171","msg":"trace[1893418406] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:444; }","duration":"124.979757ms","start":"2024-06-12T20:58:59.505737Z","end":"2024-06-12T20:58:59.630717Z","steps":["trace[1893418406] 'agreement among raft nodes before linearized reading'  (duration: 124.84517ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:58:59.630846Z","caller":"traceutil/trace.go:171","msg":"trace[452590583] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"190.640687ms","start":"2024-06-12T20:58:59.4402Z","end":"2024-06-12T20:58:59.63084Z","steps":["trace[452590583] 'process raft request'  (duration: 190.335178ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:59:04.110076Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.130319ms","expected-duration":"100ms","prefix":"","request":"header:<ID:694275683418241703 > lease_revoke:<id:09a2900e3e55fa25>","response":"size:28"}
	{"level":"info","ts":"2024-06-12T20:59:04.11021Z","caller":"traceutil/trace.go:171","msg":"trace[778719727] linearizableReadLoop","detail":"{readStateIndex:501; appliedIndex:500; }","duration":"285.186144ms","start":"2024-06-12T20:59:03.825011Z","end":"2024-06-12T20:59:04.110197Z","steps":["trace[778719727] 'read index received'  (duration: 31.875µs)","trace[778719727] 'applied index is now lower than readState.Index'  (duration: 285.153032ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T20:59:04.110279Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"285.285139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-991051-m02\" ","response":"range_response_count:1 size:3022"}
	{"level":"info","ts":"2024-06-12T20:59:04.110312Z","caller":"traceutil/trace.go:171","msg":"trace[207300915] range","detail":"{range_begin:/registry/minions/multinode-991051-m02; range_end:; response_count:1; response_revision:476; }","duration":"285.353831ms","start":"2024-06-12T20:59:03.824952Z","end":"2024-06-12T20:59:04.110306Z","steps":["trace[207300915] 'agreement among raft nodes before linearized reading'  (duration: 285.276051ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:59:47.984442Z","caller":"traceutil/trace.go:171","msg":"trace[21608255] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"241.589179ms","start":"2024-06-12T20:59:47.742809Z","end":"2024-06-12T20:59:47.984399Z","steps":["trace[21608255] 'process raft request'  (duration: 234.076912ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:59:47.984763Z","caller":"traceutil/trace.go:171","msg":"trace[880145146] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"198.263278ms","start":"2024-06-12T20:59:47.786483Z","end":"2024-06-12T20:59:47.984746Z","steps":["trace[880145146] 'process raft request'  (duration: 197.800018ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T21:02:42.361495Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-12T21:02:42.361682Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-991051","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.222:2380"],"advertise-client-urls":["https://192.168.39.222:2379"]}
	{"level":"warn","ts":"2024-06-12T21:02:42.361829Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-12T21:02:42.361926Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-12T21:02:42.421297Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.222:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-12T21:02:42.421336Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.222:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-12T21:02:42.422752Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d8a7e113a49009a2","current-leader-member-id":"d8a7e113a49009a2"}
	{"level":"info","ts":"2024-06-12T21:02:42.426828Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2024-06-12T21:02:42.426929Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2024-06-12T21:02:42.426941Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-991051","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.222:2380"],"advertise-client-urls":["https://192.168.39.222:2379"]}
	
	
	==> etcd [eeba3ac7698f6380d6082ee5c673f572a710e176fb3a3d5dc6b43dfb7bb4130c] <==
	{"level":"info","ts":"2024-06-12T21:04:23.965374Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-12T21:04:23.965391Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-12T21:04:23.965661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 switched to configuration voters=(15611694107784645026)"}
	{"level":"info","ts":"2024-06-12T21:04:23.965713Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"26257d506d5fabfb","local-member-id":"d8a7e113a49009a2","added-peer-id":"d8a7e113a49009a2","added-peer-peer-urls":["https://192.168.39.222:2380"]}
	{"level":"info","ts":"2024-06-12T21:04:23.965848Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"26257d506d5fabfb","local-member-id":"d8a7e113a49009a2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:04:23.965868Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:04:23.97409Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-12T21:04:23.974433Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d8a7e113a49009a2","initial-advertise-peer-urls":["https://192.168.39.222:2380"],"listen-peer-urls":["https://192.168.39.222:2380"],"advertise-client-urls":["https://192.168.39.222:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.222:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-12T21:04:23.974467Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-12T21:04:23.974573Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2024-06-12T21:04:23.974579Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2024-06-12T21:04:25.505866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-12T21:04:25.505927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-12T21:04:25.505963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 received MsgPreVoteResp from d8a7e113a49009a2 at term 2"}
	{"level":"info","ts":"2024-06-12T21:04:25.505975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became candidate at term 3"}
	{"level":"info","ts":"2024-06-12T21:04:25.505981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 received MsgVoteResp from d8a7e113a49009a2 at term 3"}
	{"level":"info","ts":"2024-06-12T21:04:25.505988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became leader at term 3"}
	{"level":"info","ts":"2024-06-12T21:04:25.506015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d8a7e113a49009a2 elected leader d8a7e113a49009a2 at term 3"}
	{"level":"info","ts":"2024-06-12T21:04:25.513411Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d8a7e113a49009a2","local-member-attributes":"{Name:multinode-991051 ClientURLs:[https://192.168.39.222:2379]}","request-path":"/0/members/d8a7e113a49009a2/attributes","cluster-id":"26257d506d5fabfb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-12T21:04:25.513558Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:04:25.515544Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-12T21:04:25.517193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:04:25.517376Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-12T21:04:25.517404Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-12T21:04:25.518774Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.222:2379"}
	
	
	==> kernel <==
	 21:08:09 up 10 min,  0 users,  load average: 0.50, 0.42, 0.22
	Linux multinode-991051 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [46e15df1e1c381db7fd134e2b814595d42af6ae8a54981cc908a49c53c4a1bb9] <==
	I0612 21:07:09.671557       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:07:19.684829       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:07:19.684877       1 main.go:227] handling current node
	I0612 21:07:19.684908       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:07:19.684914       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:07:29.690060       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:07:29.690146       1 main.go:227] handling current node
	I0612 21:07:29.690161       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:07:29.690167       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:07:39.706497       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:07:39.706620       1 main.go:227] handling current node
	I0612 21:07:39.706711       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:07:39.706719       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:07:49.715236       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:07:49.715281       1 main.go:227] handling current node
	I0612 21:07:49.715296       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:07:49.715303       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:07:59.721646       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:07:59.721681       1 main.go:227] handling current node
	I0612 21:07:59.721691       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:07:59.721696       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:08:09.734542       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:08:09.734590       1 main.go:227] handling current node
	I0612 21:08:09.734601       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:08:09.734606       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [98f8978fdf74512b23844eeef590cf9687d0dc616691561f425007b8c60de24c] <==
	I0612 21:01:58.698593       1 main.go:250] Node multinode-991051-m03 has CIDR [10.244.3.0/24] 
	I0612 21:02:08.704204       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:02:08.704248       1 main.go:227] handling current node
	I0612 21:02:08.704259       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:02:08.704264       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:02:08.704395       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0612 21:02:08.704417       1 main.go:250] Node multinode-991051-m03 has CIDR [10.244.3.0/24] 
	I0612 21:02:18.709074       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:02:18.709237       1 main.go:227] handling current node
	I0612 21:02:18.709279       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:02:18.709300       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:02:18.709454       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0612 21:02:18.709475       1 main.go:250] Node multinode-991051-m03 has CIDR [10.244.3.0/24] 
	I0612 21:02:28.777315       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:02:28.777415       1 main.go:227] handling current node
	I0612 21:02:28.777440       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:02:28.777457       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:02:28.777592       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0612 21:02:28.777613       1 main.go:250] Node multinode-991051-m03 has CIDR [10.244.3.0/24] 
	I0612 21:02:38.787258       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0612 21:02:38.787511       1 main.go:227] handling current node
	I0612 21:02:38.787556       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0612 21:02:38.787581       1 main.go:250] Node multinode-991051-m02 has CIDR [10.244.1.0/24] 
	I0612 21:02:38.787752       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0612 21:02:38.787790       1 main.go:250] Node multinode-991051-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e8bdc02b5de3e8061a405cbb7daa6d053de15008582ea77c42820564bacb2aaf] <==
	W0612 21:02:42.380978       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381008       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381048       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381231       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381270       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381362       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381416       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381509       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381574       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381625       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.381658       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.382979       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383038       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383073       1 logging.go:59] [core] [Channel #9 SubChannel #10] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383169       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383353       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383400       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383437       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383468       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383613       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383649       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383682       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383712       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383751       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0612 21:02:42.383787       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f189b9415f0984871a7f457c39dda70e32109051b8c0727a20cbd483bb4e9c8c] <==
	I0612 21:04:26.881743       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0612 21:04:26.881787       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0612 21:04:26.882967       1 shared_informer.go:320] Caches are synced for configmaps
	I0612 21:04:26.883669       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0612 21:04:26.885561       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0612 21:04:26.885610       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0612 21:04:26.893035       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0612 21:04:26.903968       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0612 21:04:26.904314       1 aggregator.go:165] initial CRD sync complete...
	I0612 21:04:26.904358       1 autoregister_controller.go:141] Starting autoregister controller
	I0612 21:04:26.904382       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0612 21:04:26.904405       1 cache.go:39] Caches are synced for autoregister controller
	E0612 21:04:26.912969       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0612 21:04:26.929980       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0612 21:04:26.938077       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 21:04:26.938142       1 policy_source.go:224] refreshing policies
	I0612 21:04:26.996918       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0612 21:04:27.786730       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0612 21:04:29.223525       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 21:04:29.353922       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 21:04:29.365063       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 21:04:29.425690       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0612 21:04:29.432599       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0612 21:04:39.617783       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0612 21:04:39.667613       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [01295b32b6815016713b036abc654cee51e14f9aba50c15ab21f991e5ea1bac3] <==
	I0612 21:05:06.180445       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-991051-m02" podCIDRs=["10.244.1.0/24"]
	I0612 21:05:07.041451       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.085µs"
	I0612 21:05:07.093579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.963µs"
	I0612 21:05:07.108606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.872µs"
	I0612 21:05:07.112174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.924µs"
	I0612 21:05:07.123003       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.035µs"
	I0612 21:05:07.131727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.382µs"
	I0612 21:05:10.699809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.489µs"
	I0612 21:05:14.582943       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:05:14.597862       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.188µs"
	I0612 21:05:14.609513       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.38µs"
	I0612 21:05:18.564487       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.768761ms"
	I0612 21:05:18.565917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.599µs"
	I0612 21:05:32.655363       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:05:33.918770       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:05:33.919364       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-991051-m03\" does not exist"
	I0612 21:05:33.930170       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-991051-m03" podCIDRs=["10.244.2.0/24"]
	I0612 21:05:42.875190       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:05:48.459488       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:06:29.491312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.094369ms"
	I0612 21:06:29.492088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.09µs"
	I0612 21:06:39.394168       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-6ds8j"
	I0612 21:06:39.431557       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-6ds8j"
	I0612 21:06:39.431607       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-lf7jn"
	I0612 21:06:39.452427       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-lf7jn"
	
	
	==> kube-controller-manager [3280d415399d241dd67375b235ecd4588814568e5e825a7ffdba48158bea7c85] <==
	I0612 20:58:59.634944       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-991051-m02\" does not exist"
	I0612 20:58:59.665590       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-991051-m02" podCIDRs=["10.244.1.0/24"]
	I0612 20:59:02.498018       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-991051-m02"
	I0612 20:59:09.727052       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 20:59:11.896081       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.704642ms"
	I0612 20:59:11.917585       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.377762ms"
	I0612 20:59:11.917662       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.664µs"
	I0612 20:59:11.917948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.385µs"
	I0612 20:59:15.435836       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.512386ms"
	I0612 20:59:15.436081       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.819µs"
	I0612 20:59:16.811412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.369381ms"
	I0612 20:59:16.811492       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.522µs"
	I0612 20:59:47.987906       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 20:59:47.988022       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-991051-m03\" does not exist"
	I0612 20:59:48.017510       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-991051-m03" podCIDRs=["10.244.2.0/24"]
	I0612 20:59:52.514623       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-991051-m03"
	I0612 20:59:57.337307       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:00:25.808670       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:00:26.928549       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:00:26.928598       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-991051-m03\" does not exist"
	I0612 21:00:26.947307       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-991051-m03" podCIDRs=["10.244.3.0/24"]
	I0612 21:00:36.183578       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m02"
	I0612 21:01:12.567682       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-991051-m03"
	I0612 21:01:12.619601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.178179ms"
	I0612 21:01:12.619823       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.857µs"
	
	
	==> kube-proxy [2388fa10173fb8f675b905600b8b657a7329203a4b98c3e612c5c01c94269906] <==
	I0612 20:58:24.422475       1 server_linux.go:69] "Using iptables proxy"
	I0612 20:58:24.436576       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.222"]
	I0612 20:58:24.526223       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 20:58:24.526288       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 20:58:24.526305       1 server_linux.go:165] "Using iptables Proxier"
	I0612 20:58:24.529978       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 20:58:24.530233       1 server.go:872] "Version info" version="v1.30.1"
	I0612 20:58:24.530265       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:58:24.531940       1 config.go:192] "Starting service config controller"
	I0612 20:58:24.531972       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 20:58:24.532000       1 config.go:101] "Starting endpoint slice config controller"
	I0612 20:58:24.532004       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 20:58:24.534602       1 config.go:319] "Starting node config controller"
	I0612 20:58:24.534635       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 20:58:24.632324       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 20:58:24.632402       1 shared_informer.go:320] Caches are synced for service config
	I0612 20:58:24.635448       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [48723a00f68034b2b9157bc84da729cb2ba5698b870150e02f80d3c7e1621aae] <==
	I0612 21:04:28.741732       1 server_linux.go:69] "Using iptables proxy"
	I0612 21:04:28.757931       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.222"]
	I0612 21:04:28.849421       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 21:04:28.849471       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 21:04:28.849489       1 server_linux.go:165] "Using iptables Proxier"
	I0612 21:04:28.854521       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 21:04:28.854724       1 server.go:872] "Version info" version="v1.30.1"
	I0612 21:04:28.854737       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:04:28.864268       1 config.go:192] "Starting service config controller"
	I0612 21:04:28.864291       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 21:04:28.864350       1 config.go:101] "Starting endpoint slice config controller"
	I0612 21:04:28.864354       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 21:04:28.864846       1 config.go:319] "Starting node config controller"
	I0612 21:04:28.864854       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 21:04:28.968543       1 shared_informer.go:320] Caches are synced for node config
	I0612 21:04:28.968574       1 shared_informer.go:320] Caches are synced for service config
	I0612 21:04:28.968626       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3ae9672be263494df9fd7a011d1621f35c8cafd2080af8bdc740e73f7fa580ce] <==
	E0612 20:58:07.195337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0612 20:58:07.198352       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 20:58:07.198503       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 20:58:08.027001       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0612 20:58:08.027030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0612 20:58:08.033306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0612 20:58:08.033332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0612 20:58:08.057282       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0612 20:58:08.057354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0612 20:58:08.072288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0612 20:58:08.072389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0612 20:58:08.164741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0612 20:58:08.164865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0612 20:58:08.180593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0612 20:58:08.180682       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0612 20:58:08.223328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0612 20:58:08.223523       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0612 20:58:08.261784       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0612 20:58:08.261812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0612 20:58:08.374494       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 20:58:08.375052       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 20:58:08.408945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0612 20:58:08.409392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 20:58:10.182710       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0612 21:02:42.357613       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [467c4660de162c74d8bc29ebfdaebba7594ac023fa9d24a9cf66e9bbf967f960] <==
	I0612 21:04:24.759223       1 serving.go:380] Generated self-signed cert in-memory
	W0612 21:04:26.818651       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0612 21:04:26.818693       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 21:04:26.818703       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0612 21:04:26.818709       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 21:04:26.863641       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 21:04:26.863688       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:04:26.867454       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 21:04:26.867601       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 21:04:26.867635       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 21:04:26.867660       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 21:04:26.968220       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.834365    3074 topology_manager.go:215] "Topology Admit Handler" podUID="1da33189-d542-48a2-a11a-67720a303a16" podNamespace="kube-system" podName="storage-provisioner"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.834506    3074 topology_manager.go:215] "Topology Admit Handler" podUID="8f3f0e5b-62aa-4a06-8b50-45de75f7c9df" podNamespace="default" podName="busybox-fc5497c4f-846cm"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.844441    3074 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.907719    3074 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1da33189-d542-48a2-a11a-67720a303a16-tmp\") pod \"storage-provisioner\" (UID: \"1da33189-d542-48a2-a11a-67720a303a16\") " pod="kube-system/storage-provisioner"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.907993    3074 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e-xtables-lock\") pod \"kindnet-f72hp\" (UID: \"d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e\") " pod="kube-system/kindnet-f72hp"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.908034    3074 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d-xtables-lock\") pod \"kube-proxy-nqg55\" (UID: \"2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d\") " pod="kube-system/kube-proxy-nqg55"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.908172    3074 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d-lib-modules\") pod \"kube-proxy-nqg55\" (UID: \"2cdc7d9c-1d54-462d-9542-4a5b8ab8cc0d\") " pod="kube-system/kube-proxy-nqg55"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.908301    3074 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e-cni-cfg\") pod \"kindnet-f72hp\" (UID: \"d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e\") " pod="kube-system/kindnet-f72hp"
	Jun 12 21:04:27 multinode-991051 kubelet[3074]: I0612 21:04:27.908398    3074 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e-lib-modules\") pod \"kindnet-f72hp\" (UID: \"d16fdb2c-a1ba-4e94-b370-0aa70bc70d0e\") " pod="kube-system/kindnet-f72hp"
	Jun 12 21:04:35 multinode-991051 kubelet[3074]: I0612 21:04:35.603526    3074 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 12 21:05:22 multinode-991051 kubelet[3074]: E0612 21:05:22.903681    3074 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:05:22 multinode-991051 kubelet[3074]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:05:22 multinode-991051 kubelet[3074]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:05:22 multinode-991051 kubelet[3074]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:05:22 multinode-991051 kubelet[3074]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:06:22 multinode-991051 kubelet[3074]: E0612 21:06:22.903755    3074 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:06:22 multinode-991051 kubelet[3074]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:06:22 multinode-991051 kubelet[3074]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:06:22 multinode-991051 kubelet[3074]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:06:22 multinode-991051 kubelet[3074]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:07:22 multinode-991051 kubelet[3074]: E0612 21:07:22.912202    3074 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:07:22 multinode-991051 kubelet[3074]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:07:22 multinode-991051 kubelet[3074]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:07:22 multinode-991051 kubelet[3074]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:07:22 multinode-991051 kubelet[3074]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:08:09.085834   52836 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17779-14199/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
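The "bufio.Scanner: token too long" error in the stderr block above (logs.go:258) means a single line in lastStart.txt exceeded bufio.Scanner's default 64 KiB token limit, so the log collector could not read the file back. The following is a minimal Go sketch of reading such a file with an enlarged scanner buffer; it is illustrative only, is not minikube's actual logs.go code, and the file path is just an example.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the per-line limit from the 64 KiB default to 1 MiB so a very
		// long line no longer triggers "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process one line
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}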
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-991051 -n multinode-991051
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-991051 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.30s)
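The kubelet entries in the post-mortem above repeatedly report "Could not set up iptables canary ... ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)", which suggests the ip6table_nat module is not loaded in the guest kernel. Below is a hedged Go sketch that reproduces the same probe by listing the IPv6 nat table; it assumes it is run as root inside the minikube guest with the ip6tables binary on PATH, and it is not the kubelet's own canary code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Listing the IPv6 nat table fails with "Table does not exist" when the
		// ip6table_nat kernel module is not loaded, matching the kubelet canary
		// error seen in the logs above.
		out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("ip6tables -t nat failed:", err)
		}
	}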

                                                
                                    
x
+
TestPreload (213.85s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-851055 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-851055 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m41.758485093s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-851055 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-851055 image pull gcr.io/k8s-minikube/busybox: (2.963380524s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-851055
E0612 21:14:39.752207   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-851055: (8.288655056s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-851055 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0612 21:14:56.704151   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
preload_test.go:66: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-851055 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: exit status 80 (39.705181485s)

                                                
                                                
-- stdout --
	* [test-preload-851055] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the kvm2 driver based on existing profile
	* Starting "test-preload-851055" primary control-plane node in "test-preload-851055" cluster
	* Downloading Kubernetes v1.24.4 preload ...
	* Restarting existing kvm2 VM for "test-preload-851055" ...
	* Updating the running kvm2 "test-preload-851055" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 21:14:41.708012   55459 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:14:41.708258   55459 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:14:41.708267   55459 out.go:304] Setting ErrFile to fd 2...
	I0612 21:14:41.708271   55459 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:14:41.708427   55459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:14:41.708888   55459 out.go:298] Setting JSON to false
	I0612 21:14:41.709713   55459 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7027,"bootTime":1718219855,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:14:41.709768   55459 start.go:139] virtualization: kvm guest
	I0612 21:14:41.712030   55459 out.go:177] * [test-preload-851055] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:14:41.713233   55459 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:14:41.713253   55459 notify.go:220] Checking for updates...
	I0612 21:14:41.714390   55459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:14:41.715634   55459 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:14:41.716963   55459 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:14:41.718146   55459 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:14:41.719397   55459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:14:41.721005   55459 config.go:182] Loaded profile config "test-preload-851055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0612 21:14:41.721385   55459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:14:41.721429   55459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:14:41.736019   55459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33291
	I0612 21:14:41.736422   55459 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:14:41.736898   55459 main.go:141] libmachine: Using API Version  1
	I0612 21:14:41.736938   55459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:14:41.737313   55459 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:14:41.737481   55459 main.go:141] libmachine: (test-preload-851055) Calling .DriverName
	I0612 21:14:41.739350   55459 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0612 21:14:41.740562   55459 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:14:41.740827   55459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:14:41.740857   55459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:14:41.754854   55459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0612 21:14:41.755234   55459 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:14:41.755628   55459 main.go:141] libmachine: Using API Version  1
	I0612 21:14:41.755646   55459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:14:41.755923   55459 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:14:41.756122   55459 main.go:141] libmachine: (test-preload-851055) Calling .DriverName
	I0612 21:14:41.789139   55459 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 21:14:41.790487   55459 start.go:297] selected driver: kvm2
	I0612 21:14:41.790502   55459 start.go:901] validating driver "kvm2" against &{Name:test-preload-851055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-851055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:14:41.790628   55459 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:14:41.791330   55459 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:14:41.791405   55459 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:14:41.805813   55459 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:14:41.806134   55459 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:14:41.806203   55459 cni.go:84] Creating CNI manager for ""
	I0612 21:14:41.806216   55459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:14:41.806277   55459 start.go:340] cluster config:
	{Name:test-preload-851055 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-851055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:14:41.806381   55459 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:14:41.808953   55459 out.go:177] * Starting "test-preload-851055" primary control-plane node in "test-preload-851055" cluster
	I0612 21:14:41.810002   55459 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0612 21:14:41.918624   55459 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0612 21:14:41.918698   55459 cache.go:56] Caching tarball of preloaded images
	I0612 21:14:41.918908   55459 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0612 21:14:41.920803   55459 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0612 21:14:41.922266   55459 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0612 21:14:42.032483   55459 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0612 21:14:54.737305   55459 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0612 21:14:54.737403   55459 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0612 21:14:55.579631   55459 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0612 21:14:55.579780   55459 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/test-preload-851055/config.json ...
	I0612 21:14:55.591438   55459 start.go:360] acquireMachinesLock for test-preload-851055: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:14:55.591524   55459 start.go:364] duration metric: took 50.52µs to acquireMachinesLock for "test-preload-851055"
	I0612 21:14:55.591541   55459 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:14:55.591547   55459 fix.go:54] fixHost starting: 
	I0612 21:14:55.591926   55459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:14:55.591965   55459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:14:55.606879   55459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34287
	I0612 21:14:55.607415   55459 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:14:55.607938   55459 main.go:141] libmachine: Using API Version  1
	I0612 21:14:55.607976   55459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:14:55.608399   55459 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:14:55.608624   55459 main.go:141] libmachine: (test-preload-851055) Calling .DriverName
	I0612 21:14:55.608785   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetState
	I0612 21:14:55.610653   55459 fix.go:112] recreateIfNeeded on test-preload-851055: state=Stopped err=<nil>
	I0612 21:14:55.610679   55459 main.go:141] libmachine: (test-preload-851055) Calling .DriverName
	W0612 21:14:55.610850   55459 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:14:55.666957   55459 out.go:177] * Restarting existing kvm2 VM for "test-preload-851055" ...
	I0612 21:14:55.749938   55459 main.go:141] libmachine: (test-preload-851055) Calling .Start
	I0612 21:14:55.750296   55459 main.go:141] libmachine: (test-preload-851055) Ensuring networks are active...
	I0612 21:14:55.751447   55459 main.go:141] libmachine: (test-preload-851055) Ensuring network default is active
	I0612 21:14:55.751829   55459 main.go:141] libmachine: (test-preload-851055) Ensuring network mk-test-preload-851055 is active
	I0612 21:14:55.752245   55459 main.go:141] libmachine: (test-preload-851055) Getting domain xml...
	I0612 21:14:55.753007   55459 main.go:141] libmachine: (test-preload-851055) Creating domain...
	I0612 21:14:57.162496   55459 main.go:141] libmachine: (test-preload-851055) Waiting to get IP...
	I0612 21:14:57.163219   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:14:57.163545   55459 main.go:141] libmachine: (test-preload-851055) DBG | unable to find current IP address of domain test-preload-851055 in network mk-test-preload-851055
	I0612 21:14:57.163630   55459 main.go:141] libmachine: (test-preload-851055) DBG | I0612 21:14:57.163534   55527 retry.go:31] will retry after 278.716892ms: waiting for machine to come up
	I0612 21:14:57.444079   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:14:57.444553   55459 main.go:141] libmachine: (test-preload-851055) DBG | unable to find current IP address of domain test-preload-851055 in network mk-test-preload-851055
	I0612 21:14:57.444589   55459 main.go:141] libmachine: (test-preload-851055) DBG | I0612 21:14:57.444498   55527 retry.go:31] will retry after 373.798612ms: waiting for machine to come up
	I0612 21:14:57.820274   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:14:57.820614   55459 main.go:141] libmachine: (test-preload-851055) DBG | unable to find current IP address of domain test-preload-851055 in network mk-test-preload-851055
	I0612 21:14:57.820643   55459 main.go:141] libmachine: (test-preload-851055) DBG | I0612 21:14:57.820560   55527 retry.go:31] will retry after 453.045864ms: waiting for machine to come up
	I0612 21:14:58.275155   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:14:58.275601   55459 main.go:141] libmachine: (test-preload-851055) DBG | unable to find current IP address of domain test-preload-851055 in network mk-test-preload-851055
	I0612 21:14:58.275617   55459 main.go:141] libmachine: (test-preload-851055) DBG | I0612 21:14:58.275558   55527 retry.go:31] will retry after 575.703087ms: waiting for machine to come up
	I0612 21:14:58.853410   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:14:58.853873   55459 main.go:141] libmachine: (test-preload-851055) DBG | unable to find current IP address of domain test-preload-851055 in network mk-test-preload-851055
	I0612 21:14:58.853903   55459 main.go:141] libmachine: (test-preload-851055) DBG | I0612 21:14:58.853846   55527 retry.go:31] will retry after 625.891601ms: waiting for machine to come up
	I0612 21:14:59.481778   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:14:59.482301   55459 main.go:141] libmachine: (test-preload-851055) DBG | unable to find current IP address of domain test-preload-851055 in network mk-test-preload-851055
	I0612 21:14:59.482328   55459 main.go:141] libmachine: (test-preload-851055) DBG | I0612 21:14:59.482254   55527 retry.go:31] will retry after 658.969201ms: waiting for machine to come up
	I0612 21:15:00.143251   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:00.143746   55459 main.go:141] libmachine: (test-preload-851055) DBG | unable to find current IP address of domain test-preload-851055 in network mk-test-preload-851055
	I0612 21:15:00.143772   55459 main.go:141] libmachine: (test-preload-851055) DBG | I0612 21:15:00.143713   55527 retry.go:31] will retry after 1.144667599s: waiting for machine to come up
	I0612 21:15:01.290472   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:01.290970   55459 main.go:141] libmachine: (test-preload-851055) DBG | unable to find current IP address of domain test-preload-851055 in network mk-test-preload-851055
	I0612 21:15:01.290993   55459 main.go:141] libmachine: (test-preload-851055) DBG | I0612 21:15:01.290930   55527 retry.go:31] will retry after 955.780208ms: waiting for machine to come up
	I0612 21:15:02.248006   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:02.248530   55459 main.go:141] libmachine: (test-preload-851055) DBG | unable to find current IP address of domain test-preload-851055 in network mk-test-preload-851055
	I0612 21:15:02.248561   55459 main.go:141] libmachine: (test-preload-851055) DBG | I0612 21:15:02.248478   55527 retry.go:31] will retry after 1.742755929s: waiting for machine to come up
	I0612 21:15:03.993503   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:03.993978   55459 main.go:141] libmachine: (test-preload-851055) DBG | unable to find current IP address of domain test-preload-851055 in network mk-test-preload-851055
	I0612 21:15:03.994006   55459 main.go:141] libmachine: (test-preload-851055) DBG | I0612 21:15:03.993924   55527 retry.go:31] will retry after 1.675982293s: waiting for machine to come up
	I0612 21:15:05.671310   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:05.671819   55459 main.go:141] libmachine: (test-preload-851055) DBG | unable to find current IP address of domain test-preload-851055 in network mk-test-preload-851055
	I0612 21:15:05.671857   55459 main.go:141] libmachine: (test-preload-851055) DBG | I0612 21:15:05.671763   55527 retry.go:31] will retry after 1.963068999s: waiting for machine to come up
	I0612 21:15:07.637826   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:07.638257   55459 main.go:141] libmachine: (test-preload-851055) DBG | unable to find current IP address of domain test-preload-851055 in network mk-test-preload-851055
	I0612 21:15:07.638281   55459 main.go:141] libmachine: (test-preload-851055) DBG | I0612 21:15:07.638213   55527 retry.go:31] will retry after 3.572292629s: waiting for machine to come up
	I0612 21:15:11.212493   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:11.212788   55459 main.go:141] libmachine: (test-preload-851055) DBG | unable to find current IP address of domain test-preload-851055 in network mk-test-preload-851055
	I0612 21:15:11.212821   55459 main.go:141] libmachine: (test-preload-851055) DBG | I0612 21:15:11.212756   55527 retry.go:31] will retry after 3.049031705s: waiting for machine to come up
	I0612 21:15:14.265912   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.266393   55459 main.go:141] libmachine: (test-preload-851055) Found IP for machine: 192.168.39.247
	I0612 21:15:14.266422   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has current primary IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.266428   55459 main.go:141] libmachine: (test-preload-851055) Reserving static IP address...
	I0612 21:15:14.266956   55459 main.go:141] libmachine: (test-preload-851055) Reserved static IP address: 192.168.39.247
	I0612 21:15:14.266978   55459 main.go:141] libmachine: (test-preload-851055) Waiting for SSH to be available...
	I0612 21:15:14.266999   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "test-preload-851055", mac: "52:54:00:38:8f:9a", ip: "192.168.39.247"} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:14.267030   55459 main.go:141] libmachine: (test-preload-851055) DBG | skip adding static IP to network mk-test-preload-851055 - found existing host DHCP lease matching {name: "test-preload-851055", mac: "52:54:00:38:8f:9a", ip: "192.168.39.247"}
	I0612 21:15:14.267045   55459 main.go:141] libmachine: (test-preload-851055) DBG | Getting to WaitForSSH function...
	I0612 21:15:14.269288   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.269669   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:14.269701   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.269813   55459 main.go:141] libmachine: (test-preload-851055) DBG | Using SSH client type: external
	I0612 21:15:14.269838   55459 main.go:141] libmachine: (test-preload-851055) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/test-preload-851055/id_rsa (-rw-------)
	I0612 21:15:14.269860   55459 main.go:141] libmachine: (test-preload-851055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/test-preload-851055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:15:14.269867   55459 main.go:141] libmachine: (test-preload-851055) DBG | About to run SSH command:
	I0612 21:15:14.269888   55459 main.go:141] libmachine: (test-preload-851055) DBG | exit 0
	I0612 21:15:14.399355   55459 main.go:141] libmachine: (test-preload-851055) DBG | SSH cmd err, output: <nil>: 
	I0612 21:15:14.399720   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetConfigRaw
	I0612 21:15:14.400283   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetIP
	I0612 21:15:14.402491   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.402793   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:14.402814   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.403064   55459 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/test-preload-851055/config.json ...
	I0612 21:15:14.403254   55459 machine.go:94] provisionDockerMachine start ...
	I0612 21:15:14.403271   55459 main.go:141] libmachine: (test-preload-851055) Calling .DriverName
	I0612 21:15:14.403479   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHHostname
	I0612 21:15:14.405589   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.406014   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:14.406045   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.406184   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHPort
	I0612 21:15:14.406384   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:14.406542   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:14.406681   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHUsername
	I0612 21:15:14.406837   55459 main.go:141] libmachine: Using SSH client type: native
	I0612 21:15:14.407017   55459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0612 21:15:14.407030   55459 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:15:14.519553   55459 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:15:14.519583   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetMachineName
	I0612 21:15:14.519807   55459 buildroot.go:166] provisioning hostname "test-preload-851055"
	I0612 21:15:14.519835   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetMachineName
	I0612 21:15:14.520045   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHHostname
	I0612 21:15:14.522773   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.523068   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:14.523110   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.523271   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHPort
	I0612 21:15:14.523451   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:14.523593   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:14.523722   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHUsername
	I0612 21:15:14.523866   55459 main.go:141] libmachine: Using SSH client type: native
	I0612 21:15:14.524025   55459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0612 21:15:14.524038   55459 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-851055 && echo "test-preload-851055" | sudo tee /etc/hostname
	I0612 21:15:14.650343   55459 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-851055
	
	I0612 21:15:14.650370   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHHostname
	I0612 21:15:14.653542   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.653932   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:14.653968   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.654173   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHPort
	I0612 21:15:14.654370   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:14.654516   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:14.654625   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHUsername
	I0612 21:15:14.654776   55459 main.go:141] libmachine: Using SSH client type: native
	I0612 21:15:14.654975   55459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0612 21:15:14.655001   55459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-851055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-851055/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-851055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:15:14.776490   55459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:15:14.776530   55459 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:15:14.776560   55459 buildroot.go:174] setting up certificates
	I0612 21:15:14.776573   55459 provision.go:84] configureAuth start
	I0612 21:15:14.776584   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetMachineName
	I0612 21:15:14.776879   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetIP
	I0612 21:15:14.779274   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.779594   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:14.779627   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.779730   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHHostname
	I0612 21:15:14.781599   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.781922   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:14.781949   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.782066   55459 provision.go:143] copyHostCerts
	I0612 21:15:14.782133   55459 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:15:14.782143   55459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:15:14.782202   55459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:15:14.782285   55459 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:15:14.782292   55459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:15:14.782316   55459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:15:14.782367   55459 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:15:14.782373   55459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:15:14.782392   55459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:15:14.782437   55459 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.test-preload-851055 san=[127.0.0.1 192.168.39.247 localhost minikube test-preload-851055]
	I0612 21:15:14.872671   55459 provision.go:177] copyRemoteCerts
	I0612 21:15:14.872728   55459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:15:14.872757   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHHostname
	I0612 21:15:14.875581   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.875959   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:14.875992   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:14.876183   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHPort
	I0612 21:15:14.876396   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:14.876542   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHUsername
	I0612 21:15:14.876677   55459 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/test-preload-851055/id_rsa Username:docker}
	I0612 21:15:14.963033   55459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:15:14.988366   55459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0612 21:15:15.013280   55459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:15:15.038101   55459 provision.go:87] duration metric: took 261.516936ms to configureAuth
	I0612 21:15:15.038129   55459 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:15:15.038330   55459 config.go:182] Loaded profile config "test-preload-851055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0612 21:15:15.038426   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHHostname
	I0612 21:15:15.041030   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:15.041396   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:15.041424   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:15.041679   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHPort
	I0612 21:15:15.041887   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:15.042055   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:15.042187   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHUsername
	I0612 21:15:15.042393   55459 main.go:141] libmachine: Using SSH client type: native
	I0612 21:15:15.042589   55459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0612 21:15:15.042614   55459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:15:15.220781   55459 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0612 21:15:15.220805   55459 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0612 21:15:15.220813   55459 machine.go:97] duration metric: took 817.549148ms to provisionDockerMachine
	I0612 21:15:15.220841   55459 fix.go:56] duration metric: took 19.629293588s for fixHost
	I0612 21:15:15.220852   55459 start.go:83] releasing machines lock for "test-preload-851055", held for 19.629318957s
	W0612 21:15:15.220882   55459 start.go:713] error starting host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	W0612 21:15:15.220965   55459 out.go:239] ! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0612 21:15:15.220981   55459 start.go:728] Will try again in 5 seconds ...
	I0612 21:15:20.223298   55459 start.go:360] acquireMachinesLock for test-preload-851055: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:15:20.223389   55459 start.go:364] duration metric: took 54.218µs to acquireMachinesLock for "test-preload-851055"
	I0612 21:15:20.223407   55459 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:15:20.223412   55459 fix.go:54] fixHost starting: 
	I0612 21:15:20.223685   55459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:15:20.223720   55459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:15:20.237912   55459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36149
	I0612 21:15:20.238393   55459 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:15:20.238874   55459 main.go:141] libmachine: Using API Version  1
	I0612 21:15:20.238899   55459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:15:20.239235   55459 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:15:20.239425   55459 main.go:141] libmachine: (test-preload-851055) Calling .DriverName
	I0612 21:15:20.239568   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetState
	I0612 21:15:20.241197   55459 fix.go:112] recreateIfNeeded on test-preload-851055: state=Running err=<nil>
	W0612 21:15:20.241220   55459 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:15:20.243347   55459 out.go:177] * Updating the running kvm2 "test-preload-851055" VM ...
	I0612 21:15:20.244945   55459 machine.go:94] provisionDockerMachine start ...
	I0612 21:15:20.244970   55459 main.go:141] libmachine: (test-preload-851055) Calling .DriverName
	I0612 21:15:20.245294   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHHostname
	I0612 21:15:20.247542   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:20.247962   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:20.247992   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:20.248160   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHPort
	I0612 21:15:20.248330   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:20.248481   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:20.248590   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHUsername
	I0612 21:15:20.248734   55459 main.go:141] libmachine: Using SSH client type: native
	I0612 21:15:20.248905   55459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0612 21:15:20.248919   55459 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:15:20.363733   55459 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-851055
	
	I0612 21:15:20.363769   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetMachineName
	I0612 21:15:20.364030   55459 buildroot.go:166] provisioning hostname "test-preload-851055"
	I0612 21:15:20.364062   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetMachineName
	I0612 21:15:20.364240   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHHostname
	I0612 21:15:20.366877   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:20.367287   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:20.367316   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:20.367431   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHPort
	I0612 21:15:20.367591   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:20.367747   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:20.367858   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHUsername
	I0612 21:15:20.368026   55459 main.go:141] libmachine: Using SSH client type: native
	I0612 21:15:20.368193   55459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0612 21:15:20.368205   55459 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-851055 && echo "test-preload-851055" | sudo tee /etc/hostname
	I0612 21:15:20.492627   55459 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-851055
	
	I0612 21:15:20.492653   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHHostname
	I0612 21:15:20.494999   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:20.495307   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:20.495336   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:20.495494   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHPort
	I0612 21:15:20.495690   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:20.495826   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:20.495941   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHUsername
	I0612 21:15:20.496116   55459 main.go:141] libmachine: Using SSH client type: native
	I0612 21:15:20.496303   55459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0612 21:15:20.496321   55459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-851055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-851055/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-851055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:15:20.608013   55459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:15:20.608051   55459 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:15:20.608073   55459 buildroot.go:174] setting up certificates
	I0612 21:15:20.608082   55459 provision.go:84] configureAuth start
	I0612 21:15:20.608090   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetMachineName
	I0612 21:15:20.608356   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetIP
	I0612 21:15:20.611719   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:20.612015   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:20.612034   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:20.612209   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHHostname
	I0612 21:15:20.614642   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:20.615120   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:20.615146   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:20.615439   55459 provision.go:143] copyHostCerts
	I0612 21:15:20.615489   55459 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:15:20.615499   55459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:15:20.615556   55459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:15:20.615639   55459 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:15:20.615648   55459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:15:20.615666   55459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:15:20.615713   55459 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:15:20.615720   55459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:15:20.615735   55459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:15:20.615779   55459 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.test-preload-851055 san=[127.0.0.1 192.168.39.247 localhost minikube test-preload-851055]
	I0612 21:15:21.003124   55459 provision.go:177] copyRemoteCerts
	I0612 21:15:21.003195   55459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:15:21.003222   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHHostname
	I0612 21:15:21.005917   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:21.006366   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:21.006393   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:21.006587   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHPort
	I0612 21:15:21.006790   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:21.006975   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHUsername
	I0612 21:15:21.007143   55459 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/test-preload-851055/id_rsa Username:docker}
	I0612 21:15:21.095050   55459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:15:21.118229   55459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0612 21:15:21.141063   55459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:15:21.164079   55459 provision.go:87] duration metric: took 555.9851ms to configureAuth
	I0612 21:15:21.164105   55459 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:15:21.164267   55459 config.go:182] Loaded profile config "test-preload-851055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0612 21:15:21.164340   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHHostname
	I0612 21:15:21.167295   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:21.167675   55459 main.go:141] libmachine: (test-preload-851055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:8f:9a", ip: ""} in network mk-test-preload-851055: {Iface:virbr1 ExpiryTime:2024-06-12 22:12:03 +0000 UTC Type:0 Mac:52:54:00:38:8f:9a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:test-preload-851055 Clientid:01:52:54:00:38:8f:9a}
	I0612 21:15:21.167697   55459 main.go:141] libmachine: (test-preload-851055) DBG | domain test-preload-851055 has defined IP address 192.168.39.247 and MAC address 52:54:00:38:8f:9a in network mk-test-preload-851055
	I0612 21:15:21.167862   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHPort
	I0612 21:15:21.168087   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:21.168255   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHKeyPath
	I0612 21:15:21.168406   55459 main.go:141] libmachine: (test-preload-851055) Calling .GetSSHUsername
	I0612 21:15:21.168551   55459 main.go:141] libmachine: Using SSH client type: native
	I0612 21:15:21.168719   55459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0612 21:15:21.168751   55459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:15:21.348840   55459 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0612 21:15:21.348863   55459 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0612 21:15:21.348870   55459 machine.go:97] duration metric: took 1.103911008s to provisionDockerMachine
	I0612 21:15:21.348893   55459 fix.go:56] duration metric: took 1.125482025s for fixHost
	I0612 21:15:21.348899   55459 start.go:83] releasing machines lock for "test-preload-851055", held for 1.125503631s
	W0612 21:15:21.348983   55459 out.go:239] * Failed to start kvm2 VM. Running "minikube delete -p test-preload-851055" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	* Failed to start kvm2 VM. Running "minikube delete -p test-preload-851055" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0612 21:15:21.351159   55459 out.go:177] 
	W0612 21:15:21.352753   55459 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	W0612 21:15:21.352799   55459 out.go:239] * 
	* 
	W0612 21:15:21.353702   55459 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0612 21:15:21.355291   55459 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:68: out/minikube-linux-amd64 start -p test-preload-851055 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-06-12 21:15:21.387006965 +0000 UTC m=+3873.001457335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-851055 -n test-preload-851055
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-851055 -n test-preload-851055: exit status 6 (229.344106ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:15:21.599747   55683 status.go:417] kubeconfig endpoint: get endpoint: "test-preload-851055" does not appear in /home/jenkins/minikube-integration/17779-14199/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-851055" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
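The status error above ("test-preload-851055" does not appear in /home/jenkins/minikube-integration/17779-14199/kubeconfig) and the "stale minikube-vm" warning are side effects of the aborted provisioning: the profile never got as far as writing its API endpoint into the kubeconfig, so `minikube status` cannot resolve it. As a hedged sketch (profile name taken from this report; standard minikube/kubectl commands, not part of the test harness), the context could be inspected and refreshed with:

	# Re-point the kubectl context at the profile's current API endpoint
	out/minikube-linux-amd64 update-context -p test-preload-851055
	# Confirm which context kubectl would use afterwards
	kubectl config current-context

Since the cleanup step below deletes the profile anyway, the stale entry disappears with it.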
helpers_test.go:175: Cleaning up "test-preload-851055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-851055
--- FAIL: TestPreload (213.85s)
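Every provisioning attempt in this failure stops at the same point: `sudo systemctl restart crio` exits with status 1 and "A dependency job for crio.service failed. See 'journalctl -xe' for details.", so CRI-O never comes back after /etc/sysconfig/crio.minikube is written. A hedged diagnostic sketch, assuming the VM is still reachable over SSH (profile name copied from this report; the log does not record which dependency unit actually failed):

	# Open a shell in the guest while it is still running
	out/minikube-linux-amd64 ssh -p test-preload-851055
	# Inside the guest: list failed units and inspect crio.service and its dependency chain
	sudo systemctl --failed
	sudo systemctl list-dependencies crio.service
	sudo systemctl status crio.service
	sudo journalctl -xeu crio.service --no-pager | tail -n 50

Once the failing unit is identified, the report's own advice applies: `minikube delete -p test-preload-851055` followed by a fresh start, or attach `minikube logs --file=logs.txt` to an upstream issue as the error box above suggests.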

                                                
                                    
x
+
TestKubernetesUpgrade (400.1s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-724108 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-724108 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m56.82037863s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-724108] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-724108" primary control-plane node in "kubernetes-upgrade-724108" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 21:17:18.190021   56731 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:17:18.190317   56731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:17:18.190330   56731 out.go:304] Setting ErrFile to fd 2...
	I0612 21:17:18.190337   56731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:17:18.190533   56731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:17:18.191022   56731 out.go:298] Setting JSON to false
	I0612 21:17:18.191881   56731 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7183,"bootTime":1718219855,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:17:18.191952   56731 start.go:139] virtualization: kvm guest
	I0612 21:17:18.193808   56731 out.go:177] * [kubernetes-upgrade-724108] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:17:18.196680   56731 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:17:18.195481   56731 notify.go:220] Checking for updates...
	I0612 21:17:18.200346   56731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:17:18.202785   56731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:17:18.204468   56731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:17:18.206919   56731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:17:18.208451   56731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:17:18.210038   56731 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:17:18.248035   56731 out.go:177] * Using the kvm2 driver based on user configuration
	I0612 21:17:18.249337   56731 start.go:297] selected driver: kvm2
	I0612 21:17:18.249353   56731 start.go:901] validating driver "kvm2" against <nil>
	I0612 21:17:18.249365   56731 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:17:18.250359   56731 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:17:18.262701   56731 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:17:18.279238   56731 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:17:18.279295   56731 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0612 21:17:18.279542   56731 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0612 21:17:18.279569   56731 cni.go:84] Creating CNI manager for ""
	I0612 21:17:18.279579   56731 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:17:18.279594   56731 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0612 21:17:18.279649   56731 start.go:340] cluster config:
	{Name:kubernetes-upgrade-724108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-724108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:17:18.279763   56731 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:17:18.281418   56731 out.go:177] * Starting "kubernetes-upgrade-724108" primary control-plane node in "kubernetes-upgrade-724108" cluster
	I0612 21:17:18.282677   56731 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:17:18.282718   56731 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0612 21:17:18.282728   56731 cache.go:56] Caching tarball of preloaded images
	I0612 21:17:18.282805   56731 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:17:18.282819   56731 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0612 21:17:18.283312   56731 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/config.json ...
	I0612 21:17:18.283344   56731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/config.json: {Name:mka7be7811c97ce6fda0102c5c2154fdbd9dff41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:17:18.283504   56731 start.go:360] acquireMachinesLock for kubernetes-upgrade-724108: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:17:45.834892   56731 start.go:364] duration metric: took 27.551337529s to acquireMachinesLock for "kubernetes-upgrade-724108"
	I0612 21:17:45.834978   56731 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-724108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-724108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:17:45.835119   56731 start.go:125] createHost starting for "" (driver="kvm2")
	I0612 21:17:45.837526   56731 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 21:17:45.837744   56731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:17:45.837802   56731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:17:45.855387   56731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45497
	I0612 21:17:45.855830   56731 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:17:45.856470   56731 main.go:141] libmachine: Using API Version  1
	I0612 21:17:45.856492   56731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:17:45.856842   56731 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:17:45.857058   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetMachineName
	I0612 21:17:45.857273   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .DriverName
	I0612 21:17:45.857446   56731 start.go:159] libmachine.API.Create for "kubernetes-upgrade-724108" (driver="kvm2")
	I0612 21:17:45.857504   56731 client.go:168] LocalClient.Create starting
	I0612 21:17:45.857539   56731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem
	I0612 21:17:45.857572   56731 main.go:141] libmachine: Decoding PEM data...
	I0612 21:17:45.857594   56731 main.go:141] libmachine: Parsing certificate...
	I0612 21:17:45.857664   56731 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem
	I0612 21:17:45.857687   56731 main.go:141] libmachine: Decoding PEM data...
	I0612 21:17:45.857702   56731 main.go:141] libmachine: Parsing certificate...
	I0612 21:17:45.857725   56731 main.go:141] libmachine: Running pre-create checks...
	I0612 21:17:45.857742   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .PreCreateCheck
	I0612 21:17:45.858134   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetConfigRaw
	I0612 21:17:45.858544   56731 main.go:141] libmachine: Creating machine...
	I0612 21:17:45.858558   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .Create
	I0612 21:17:45.858687   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Creating KVM machine...
	I0612 21:17:45.859836   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found existing default KVM network
	I0612 21:17:45.860806   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:45.860630   57490 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4a:d1:bf} reservation:<nil>}
	I0612 21:17:45.861569   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:45.861466   57490 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000252330}
	I0612 21:17:45.861594   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | created network xml: 
	I0612 21:17:45.861607   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | <network>
	I0612 21:17:45.861619   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG |   <name>mk-kubernetes-upgrade-724108</name>
	I0612 21:17:45.861632   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG |   <dns enable='no'/>
	I0612 21:17:45.861642   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG |   
	I0612 21:17:45.861654   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0612 21:17:45.861662   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG |     <dhcp>
	I0612 21:17:45.861673   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0612 21:17:45.861681   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG |     </dhcp>
	I0612 21:17:45.861690   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG |   </ip>
	I0612 21:17:45.861697   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG |   
	I0612 21:17:45.861706   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | </network>
	I0612 21:17:45.861720   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | 
	I0612 21:17:45.867705   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | trying to create private KVM network mk-kubernetes-upgrade-724108 192.168.50.0/24...
	I0612 21:17:45.944938   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | private KVM network mk-kubernetes-upgrade-724108 192.168.50.0/24 created
	I0612 21:17:45.944986   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Setting up store path in /home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108 ...
	I0612 21:17:45.945002   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:45.944904   57490 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:17:45.945024   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Building disk image from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0612 21:17:45.945055   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Downloading /home/jenkins/minikube-integration/17779-14199/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0612 21:17:46.193525   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:46.193388   57490 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108/id_rsa...
	I0612 21:17:46.375743   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:46.375610   57490 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108/kubernetes-upgrade-724108.rawdisk...
	I0612 21:17:46.375780   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Writing magic tar header
	I0612 21:17:46.375800   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Writing SSH key tar header
	I0612 21:17:46.375814   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:46.375759   57490 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108 ...
	I0612 21:17:46.375956   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108
	I0612 21:17:46.375986   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines
	I0612 21:17:46.376001   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108 (perms=drwx------)
	I0612 21:17:46.376024   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines (perms=drwxr-xr-x)
	I0612 21:17:46.376037   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube (perms=drwxr-xr-x)
	I0612 21:17:46.376051   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:17:46.376062   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199
	I0612 21:17:46.376077   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199 (perms=drwxrwxr-x)
	I0612 21:17:46.376102   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0612 21:17:46.376124   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0612 21:17:46.376135   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Checking permissions on dir: /home/jenkins
	I0612 21:17:46.376144   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Checking permissions on dir: /home
	I0612 21:17:46.376153   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Skipping /home - not owner
	I0612 21:17:46.376178   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0612 21:17:46.376198   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Creating domain...
	I0612 21:17:46.377330   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) define libvirt domain using xml: 
	I0612 21:17:46.377355   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) <domain type='kvm'>
	I0612 21:17:46.377366   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)   <name>kubernetes-upgrade-724108</name>
	I0612 21:17:46.377375   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)   <memory unit='MiB'>2200</memory>
	I0612 21:17:46.377388   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)   <vcpu>2</vcpu>
	I0612 21:17:46.377396   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)   <features>
	I0612 21:17:46.377406   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     <acpi/>
	I0612 21:17:46.377412   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     <apic/>
	I0612 21:17:46.377423   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     <pae/>
	I0612 21:17:46.377430   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     
	I0612 21:17:46.377455   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)   </features>
	I0612 21:17:46.377471   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)   <cpu mode='host-passthrough'>
	I0612 21:17:46.377480   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)   
	I0612 21:17:46.377488   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)   </cpu>
	I0612 21:17:46.377497   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)   <os>
	I0612 21:17:46.377505   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     <type>hvm</type>
	I0612 21:17:46.377515   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     <boot dev='cdrom'/>
	I0612 21:17:46.377522   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     <boot dev='hd'/>
	I0612 21:17:46.377532   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     <bootmenu enable='no'/>
	I0612 21:17:46.377539   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)   </os>
	I0612 21:17:46.377548   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)   <devices>
	I0612 21:17:46.377561   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     <disk type='file' device='cdrom'>
	I0612 21:17:46.377576   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108/boot2docker.iso'/>
	I0612 21:17:46.377585   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)       <target dev='hdc' bus='scsi'/>
	I0612 21:17:46.377593   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)       <readonly/>
	I0612 21:17:46.377601   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     </disk>
	I0612 21:17:46.377610   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     <disk type='file' device='disk'>
	I0612 21:17:46.377620   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0612 21:17:46.377636   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108/kubernetes-upgrade-724108.rawdisk'/>
	I0612 21:17:46.377650   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)       <target dev='hda' bus='virtio'/>
	I0612 21:17:46.377660   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     </disk>
	I0612 21:17:46.377668   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     <interface type='network'>
	I0612 21:17:46.377679   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)       <source network='mk-kubernetes-upgrade-724108'/>
	I0612 21:17:46.377687   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)       <model type='virtio'/>
	I0612 21:17:46.377697   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     </interface>
	I0612 21:17:46.377705   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     <interface type='network'>
	I0612 21:17:46.377715   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)       <source network='default'/>
	I0612 21:17:46.377728   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)       <model type='virtio'/>
	I0612 21:17:46.377737   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     </interface>
	I0612 21:17:46.377744   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     <serial type='pty'>
	I0612 21:17:46.377755   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)       <target port='0'/>
	I0612 21:17:46.377762   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     </serial>
	I0612 21:17:46.377770   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     <console type='pty'>
	I0612 21:17:46.377778   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)       <target type='serial' port='0'/>
	I0612 21:17:46.377786   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     </console>
	I0612 21:17:46.377794   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     <rng model='virtio'>
	I0612 21:17:46.377817   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)       <backend model='random'>/dev/random</backend>
	I0612 21:17:46.377836   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     </rng>
	I0612 21:17:46.377851   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     
	I0612 21:17:46.377857   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)     
	I0612 21:17:46.377864   56731 main.go:141] libmachine: (kubernetes-upgrade-724108)   </devices>
	I0612 21:17:46.377869   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) </domain>
	I0612 21:17:46.377878   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) 
	I0612 21:17:46.383720   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:fd:82:55 in network default
	I0612 21:17:46.384285   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Ensuring networks are active...
	I0612 21:17:46.384315   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:17:46.385121   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Ensuring network default is active
	I0612 21:17:46.385680   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Ensuring network mk-kubernetes-upgrade-724108 is active
	I0612 21:17:46.386365   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Getting domain xml...
	I0612 21:17:46.387351   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Creating domain...
	I0612 21:17:47.702314   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Waiting to get IP...
	I0612 21:17:47.703342   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:17:47.704031   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | unable to find current IP address of domain kubernetes-upgrade-724108 in network mk-kubernetes-upgrade-724108
	I0612 21:17:47.704119   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:47.704043   57490 retry.go:31] will retry after 199.44643ms: waiting for machine to come up
	I0612 21:17:47.905525   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:17:47.906059   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | unable to find current IP address of domain kubernetes-upgrade-724108 in network mk-kubernetes-upgrade-724108
	I0612 21:17:47.906092   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:47.906004   57490 retry.go:31] will retry after 334.209438ms: waiting for machine to come up
	I0612 21:17:48.241752   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:17:48.242255   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | unable to find current IP address of domain kubernetes-upgrade-724108 in network mk-kubernetes-upgrade-724108
	I0612 21:17:48.242282   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:48.242215   57490 retry.go:31] will retry after 447.241796ms: waiting for machine to come up
	I0612 21:17:48.690600   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:17:48.691054   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | unable to find current IP address of domain kubernetes-upgrade-724108 in network mk-kubernetes-upgrade-724108
	I0612 21:17:48.691084   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:48.691017   57490 retry.go:31] will retry after 507.09421ms: waiting for machine to come up
	I0612 21:17:49.199431   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:17:49.199898   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | unable to find current IP address of domain kubernetes-upgrade-724108 in network mk-kubernetes-upgrade-724108
	I0612 21:17:49.199929   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:49.199853   57490 retry.go:31] will retry after 468.519568ms: waiting for machine to come up
	I0612 21:17:49.669689   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:17:49.670262   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | unable to find current IP address of domain kubernetes-upgrade-724108 in network mk-kubernetes-upgrade-724108
	I0612 21:17:49.670294   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:49.670209   57490 retry.go:31] will retry after 669.577518ms: waiting for machine to come up
	I0612 21:17:50.341331   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:17:50.341874   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | unable to find current IP address of domain kubernetes-upgrade-724108 in network mk-kubernetes-upgrade-724108
	I0612 21:17:50.341901   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:50.341809   57490 retry.go:31] will retry after 757.344548ms: waiting for machine to come up
	I0612 21:17:51.101357   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:17:51.101862   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | unable to find current IP address of domain kubernetes-upgrade-724108 in network mk-kubernetes-upgrade-724108
	I0612 21:17:51.101888   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:51.101770   57490 retry.go:31] will retry after 986.653515ms: waiting for machine to come up
	I0612 21:17:52.089862   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:17:52.090306   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | unable to find current IP address of domain kubernetes-upgrade-724108 in network mk-kubernetes-upgrade-724108
	I0612 21:17:52.090332   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:52.090284   57490 retry.go:31] will retry after 1.59233571s: waiting for machine to come up
	I0612 21:17:53.684848   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:17:53.685302   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | unable to find current IP address of domain kubernetes-upgrade-724108 in network mk-kubernetes-upgrade-724108
	I0612 21:17:53.685326   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:53.685234   57490 retry.go:31] will retry after 1.428138025s: waiting for machine to come up
	I0612 21:17:55.115948   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:17:55.116487   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | unable to find current IP address of domain kubernetes-upgrade-724108 in network mk-kubernetes-upgrade-724108
	I0612 21:17:55.116513   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:55.116432   57490 retry.go:31] will retry after 2.162825486s: waiting for machine to come up
	I0612 21:17:57.281491   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:17:57.281979   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | unable to find current IP address of domain kubernetes-upgrade-724108 in network mk-kubernetes-upgrade-724108
	I0612 21:17:57.282013   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:57.281956   57490 retry.go:31] will retry after 2.475675362s: waiting for machine to come up
	I0612 21:17:59.760271   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:17:59.760712   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | unable to find current IP address of domain kubernetes-upgrade-724108 in network mk-kubernetes-upgrade-724108
	I0612 21:17:59.760737   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:17:59.760669   57490 retry.go:31] will retry after 3.228011667s: waiting for machine to come up
	I0612 21:18:02.989886   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:02.990513   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | unable to find current IP address of domain kubernetes-upgrade-724108 in network mk-kubernetes-upgrade-724108
	I0612 21:18:02.990542   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | I0612 21:18:02.990460   57490 retry.go:31] will retry after 4.334221641s: waiting for machine to come up
	I0612 21:18:07.326349   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.326754   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Found IP for machine: 192.168.50.31
	I0612 21:18:07.326776   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Reserving static IP address...
	I0612 21:18:07.326810   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has current primary IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.327276   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-724108", mac: "52:54:00:f7:aa:0a", ip: "192.168.50.31"} in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.401521   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Reserved static IP address: 192.168.50.31
	I0612 21:18:07.401549   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Getting to WaitForSSH function...
	I0612 21:18:07.401559   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Waiting for SSH to be available...
	I0612 21:18:07.404570   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.405030   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:07.405053   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.405223   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Using SSH client type: external
	I0612 21:18:07.405253   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108/id_rsa (-rw-------)
	I0612 21:18:07.405307   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:18:07.405329   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | About to run SSH command:
	I0612 21:18:07.405346   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | exit 0
	I0612 21:18:07.527485   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | SSH cmd err, output: <nil>: 
	I0612 21:18:07.527704   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) KVM machine creation complete!
	I0612 21:18:07.527965   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetConfigRaw
	I0612 21:18:07.528557   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .DriverName
	I0612 21:18:07.528797   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .DriverName
	I0612 21:18:07.529016   56731 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0612 21:18:07.529032   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetState
	I0612 21:18:07.530712   56731 main.go:141] libmachine: Detecting operating system of created instance...
	I0612 21:18:07.530730   56731 main.go:141] libmachine: Waiting for SSH to be available...
	I0612 21:18:07.530739   56731 main.go:141] libmachine: Getting to WaitForSSH function...
	I0612 21:18:07.530748   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHHostname
	I0612 21:18:07.533398   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.533835   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:07.533864   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.533959   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHPort
	I0612 21:18:07.534171   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:07.534317   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:07.534475   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHUsername
	I0612 21:18:07.534653   56731 main.go:141] libmachine: Using SSH client type: native
	I0612 21:18:07.534896   56731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0612 21:18:07.534910   56731 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0612 21:18:07.634749   56731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:18:07.634774   56731 main.go:141] libmachine: Detecting the provisioner...
	I0612 21:18:07.634785   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHHostname
	I0612 21:18:07.637906   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.638313   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:07.638336   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.638512   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHPort
	I0612 21:18:07.638729   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:07.638911   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:07.639061   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHUsername
	I0612 21:18:07.639259   56731 main.go:141] libmachine: Using SSH client type: native
	I0612 21:18:07.639447   56731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0612 21:18:07.639459   56731 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0612 21:18:07.740049   56731 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0612 21:18:07.740152   56731 main.go:141] libmachine: found compatible host: buildroot
	I0612 21:18:07.740166   56731 main.go:141] libmachine: Provisioning with buildroot...
	I0612 21:18:07.740174   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetMachineName
	I0612 21:18:07.740453   56731 buildroot.go:166] provisioning hostname "kubernetes-upgrade-724108"
	I0612 21:18:07.740485   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetMachineName
	I0612 21:18:07.740660   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHHostname
	I0612 21:18:07.743233   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.743557   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:07.743574   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.743696   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHPort
	I0612 21:18:07.743871   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:07.744049   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:07.744184   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHUsername
	I0612 21:18:07.744382   56731 main.go:141] libmachine: Using SSH client type: native
	I0612 21:18:07.744538   56731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0612 21:18:07.744550   56731 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-724108 && echo "kubernetes-upgrade-724108" | sudo tee /etc/hostname
	I0612 21:18:07.857864   56731 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-724108
	
	I0612 21:18:07.857899   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHHostname
	I0612 21:18:07.860595   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.860957   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:07.860997   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.861196   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHPort
	I0612 21:18:07.861372   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:07.861550   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:07.861742   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHUsername
	I0612 21:18:07.861905   56731 main.go:141] libmachine: Using SSH client type: native
	I0612 21:18:07.862072   56731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0612 21:18:07.862092   56731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-724108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-724108/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-724108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:18:07.967708   56731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:18:07.967737   56731 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:18:07.967770   56731 buildroot.go:174] setting up certificates
	I0612 21:18:07.967787   56731 provision.go:84] configureAuth start
	I0612 21:18:07.967796   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetMachineName
	I0612 21:18:07.968203   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetIP
	I0612 21:18:07.971142   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.971581   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:07.971612   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.971773   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHHostname
	I0612 21:18:07.974103   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.974366   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:07.974407   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:07.974517   56731 provision.go:143] copyHostCerts
	I0612 21:18:07.974573   56731 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:18:07.974583   56731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:18:07.974633   56731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:18:07.974724   56731 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:18:07.974731   56731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:18:07.974748   56731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:18:07.974820   56731 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:18:07.974827   56731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:18:07.974843   56731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:18:07.974884   56731 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-724108 san=[127.0.0.1 192.168.50.31 kubernetes-upgrade-724108 localhost minikube]
	I0612 21:18:08.174552   56731 provision.go:177] copyRemoteCerts
	I0612 21:18:08.174612   56731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:18:08.174657   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHHostname
	I0612 21:18:08.177493   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.177813   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:08.177839   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.178049   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHPort
	I0612 21:18:08.178227   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:08.178389   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHUsername
	I0612 21:18:08.178565   56731 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108/id_rsa Username:docker}
	I0612 21:18:08.259686   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:18:08.285410   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0612 21:18:08.310277   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 21:18:08.335214   56731 provision.go:87] duration metric: took 367.409989ms to configureAuth
	I0612 21:18:08.335253   56731 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:18:08.335430   56731 config.go:182] Loaded profile config "kubernetes-upgrade-724108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:18:08.335534   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHHostname
	I0612 21:18:08.338568   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.338882   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:08.338923   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.339142   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHPort
	I0612 21:18:08.339346   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:08.339539   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:08.339706   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHUsername
	I0612 21:18:08.339933   56731 main.go:141] libmachine: Using SSH client type: native
	I0612 21:18:08.340179   56731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0612 21:18:08.340207   56731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:18:08.618739   56731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:18:08.618762   56731 main.go:141] libmachine: Checking connection to Docker...
	I0612 21:18:08.618772   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetURL
	I0612 21:18:08.619972   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Using libvirt version 6000000
	I0612 21:18:08.622311   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.622692   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:08.622718   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.622885   56731 main.go:141] libmachine: Docker is up and running!
	I0612 21:18:08.622908   56731 main.go:141] libmachine: Reticulating splines...
	I0612 21:18:08.622915   56731 client.go:171] duration metric: took 22.765400497s to LocalClient.Create
	I0612 21:18:08.622938   56731 start.go:167] duration metric: took 22.765493441s to libmachine.API.Create "kubernetes-upgrade-724108"
	I0612 21:18:08.622951   56731 start.go:293] postStartSetup for "kubernetes-upgrade-724108" (driver="kvm2")
	I0612 21:18:08.622963   56731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:18:08.622988   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .DriverName
	I0612 21:18:08.623236   56731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:18:08.623261   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHHostname
	I0612 21:18:08.625480   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.625778   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:08.625808   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.625905   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHPort
	I0612 21:18:08.626086   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:08.626267   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHUsername
	I0612 21:18:08.626403   56731 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108/id_rsa Username:docker}
	I0612 21:18:08.710490   56731 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:18:08.714913   56731 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:18:08.714943   56731 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:18:08.715049   56731 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:18:08.715165   56731 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:18:08.715352   56731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:18:08.724784   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:18:08.751899   56731 start.go:296] duration metric: took 128.934554ms for postStartSetup
	I0612 21:18:08.751955   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetConfigRaw
	I0612 21:18:08.752580   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetIP
	I0612 21:18:08.755423   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.755806   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:08.755833   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.756012   56731 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/config.json ...
	I0612 21:18:08.756192   56731 start.go:128] duration metric: took 22.921056743s to createHost
	I0612 21:18:08.756217   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHHostname
	I0612 21:18:08.758369   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.758662   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:08.758684   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.758996   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHPort
	I0612 21:18:08.759220   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:08.759392   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:08.759561   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHUsername
	I0612 21:18:08.759725   56731 main.go:141] libmachine: Using SSH client type: native
	I0612 21:18:08.759873   56731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0612 21:18:08.759882   56731 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 21:18:08.860288   56731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718227088.830989324
	
	I0612 21:18:08.860314   56731 fix.go:216] guest clock: 1718227088.830989324
	I0612 21:18:08.860324   56731 fix.go:229] Guest: 2024-06-12 21:18:08.830989324 +0000 UTC Remote: 2024-06-12 21:18:08.756203874 +0000 UTC m=+50.619119364 (delta=74.78545ms)
	I0612 21:18:08.860371   56731 fix.go:200] guest clock delta is within tolerance: 74.78545ms
	I0612 21:18:08.860377   56731 start.go:83] releasing machines lock for "kubernetes-upgrade-724108", held for 23.02545401s
	I0612 21:18:08.860410   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .DriverName
	I0612 21:18:08.860661   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetIP
	I0612 21:18:08.863629   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.864061   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:08.864088   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.864257   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .DriverName
	I0612 21:18:08.864715   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .DriverName
	I0612 21:18:08.864890   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .DriverName
	I0612 21:18:08.864943   56731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:18:08.864986   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHHostname
	I0612 21:18:08.865108   56731 ssh_runner.go:195] Run: cat /version.json
	I0612 21:18:08.865133   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHHostname
	I0612 21:18:08.867869   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.868029   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.868304   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:08.868332   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.868483   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:08.868489   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHPort
	I0612 21:18:08.868506   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:08.868661   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:08.868819   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHUsername
	I0612 21:18:08.868886   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHPort
	I0612 21:18:08.869000   56731 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108/id_rsa Username:docker}
	I0612 21:18:08.869055   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:18:08.869204   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHUsername
	I0612 21:18:08.869378   56731 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108/id_rsa Username:docker}
	I0612 21:18:08.965645   56731 ssh_runner.go:195] Run: systemctl --version
	I0612 21:18:08.972209   56731 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:18:09.131214   56731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:18:09.137847   56731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:18:09.137923   56731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:18:09.155566   56731 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:18:09.155592   56731 start.go:494] detecting cgroup driver to use...
	I0612 21:18:09.155671   56731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:18:09.177694   56731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:18:09.192677   56731 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:18:09.192735   56731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:18:09.210723   56731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:18:09.228858   56731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:18:09.352769   56731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:18:09.514681   56731 docker.go:233] disabling docker service ...
	I0612 21:18:09.514748   56731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:18:09.529693   56731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:18:09.543806   56731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:18:09.680612   56731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:18:09.822141   56731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:18:09.840033   56731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:18:09.859389   56731 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0612 21:18:09.859459   56731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:18:09.870337   56731 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:18:09.870404   56731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:18:09.881295   56731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:18:09.892124   56731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:18:09.902688   56731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:18:09.913638   56731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:18:09.923490   56731 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:18:09.923545   56731 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:18:09.936737   56731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:18:09.946136   56731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:18:10.066816   56731 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:18:10.234770   56731 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:18:10.234854   56731 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:18:10.240192   56731 start.go:562] Will wait 60s for crictl version
	I0612 21:18:10.240260   56731 ssh_runner.go:195] Run: which crictl
	I0612 21:18:10.245262   56731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:18:10.299315   56731 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:18:10.299407   56731 ssh_runner.go:195] Run: crio --version
	I0612 21:18:10.329609   56731 ssh_runner.go:195] Run: crio --version
	I0612 21:18:10.362817   56731 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0612 21:18:10.364093   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetIP
	I0612 21:18:10.367232   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:10.367685   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:18:10.367720   56731 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:18:10.368192   56731 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0612 21:18:10.373162   56731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:18:10.386552   56731 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-724108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-724108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:18:10.386680   56731 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:18:10.386738   56731 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:18:10.423248   56731 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:18:10.423309   56731 ssh_runner.go:195] Run: which lz4
	I0612 21:18:10.427960   56731 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 21:18:10.433164   56731 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:18:10.433212   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0612 21:18:12.247028   56731 crio.go:462] duration metric: took 1.819110534s to copy over tarball
	I0612 21:18:12.247113   56731 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:18:14.960579   56731 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.713431219s)
	I0612 21:18:14.960612   56731 crio.go:469] duration metric: took 2.713549564s to extract the tarball
	I0612 21:18:14.960623   56731 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:18:15.006642   56731 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:18:15.062424   56731 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:18:15.062473   56731 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0612 21:18:15.062576   56731 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:18:15.062590   56731 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:18:15.062610   56731 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0612 21:18:15.062640   56731 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:18:15.062664   56731 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:18:15.062701   56731 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0612 21:18:15.062708   56731 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:18:15.062590   56731 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:18:15.063966   56731 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:18:15.064206   56731 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0612 21:18:15.064249   56731 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:18:15.064262   56731 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:18:15.064206   56731 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0612 21:18:15.064362   56731 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:18:15.064370   56731 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:18:15.064403   56731 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:18:15.223842   56731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0612 21:18:15.239961   56731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:18:15.288366   56731 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0612 21:18:15.288420   56731 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0612 21:18:15.288459   56731 ssh_runner.go:195] Run: which crictl
	I0612 21:18:15.303800   56731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:18:15.310507   56731 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0612 21:18:15.310555   56731 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:18:15.310594   56731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0612 21:18:15.310598   56731 ssh_runner.go:195] Run: which crictl
	I0612 21:18:15.362120   56731 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0612 21:18:15.362155   56731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:18:15.362170   56731 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:18:15.362218   56731 ssh_runner.go:195] Run: which crictl
	I0612 21:18:15.374820   56731 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0612 21:18:15.406366   56731 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0612 21:18:15.406368   56731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:18:15.440410   56731 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0612 21:18:15.443625   56731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:18:15.461484   56731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0612 21:18:15.464476   56731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:18:15.489298   56731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0612 21:18:15.507818   56731 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0612 21:18:15.507864   56731 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:18:15.507911   56731 ssh_runner.go:195] Run: which crictl
	I0612 21:18:15.522826   56731 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0612 21:18:15.522871   56731 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0612 21:18:15.522962   56731 ssh_runner.go:195] Run: which crictl
	I0612 21:18:15.560152   56731 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0612 21:18:15.560197   56731 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:18:15.560240   56731 ssh_runner.go:195] Run: which crictl
	I0612 21:18:15.578592   56731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:18:15.578664   56731 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0612 21:18:15.578632   56731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:18:15.578712   56731 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:18:15.578631   56731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0612 21:18:15.578756   56731 ssh_runner.go:195] Run: which crictl
	I0612 21:18:15.583032   56731 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0612 21:18:15.675226   56731 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0612 21:18:15.681520   56731 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0612 21:18:15.681564   56731 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0612 21:18:15.681574   56731 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0612 21:18:16.022097   56731 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:18:16.166837   56731 cache_images.go:92] duration metric: took 1.104319691s to LoadCachedImages
	W0612 21:18:16.166953   56731 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0612 21:18:16.166971   56731 kubeadm.go:928] updating node { 192.168.50.31 8443 v1.20.0 crio true true} ...
	I0612 21:18:16.167149   56731 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-724108 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-724108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:18:16.167266   56731 ssh_runner.go:195] Run: crio config
	I0612 21:18:16.225115   56731 cni.go:84] Creating CNI manager for ""
	I0612 21:18:16.225139   56731 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:18:16.225153   56731 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:18:16.225172   56731 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.31 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-724108 NodeName:kubernetes-upgrade-724108 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0612 21:18:16.225367   56731 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-724108"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:18:16.225443   56731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0612 21:18:16.236499   56731 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:18:16.236596   56731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:18:16.246720   56731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0612 21:18:16.267019   56731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:18:16.284996   56731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0612 21:18:16.305060   56731 ssh_runner.go:195] Run: grep 192.168.50.31	control-plane.minikube.internal$ /etc/hosts
	I0612 21:18:16.309423   56731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:18:16.327568   56731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:18:16.449063   56731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:18:16.467702   56731 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108 for IP: 192.168.50.31
	I0612 21:18:16.467740   56731 certs.go:194] generating shared ca certs ...
	I0612 21:18:16.467768   56731 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:18:16.467946   56731 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:18:16.467999   56731 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:18:16.468014   56731 certs.go:256] generating profile certs ...
	I0612 21:18:16.468098   56731 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/client.key
	I0612 21:18:16.468115   56731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/client.crt with IP's: []
	I0612 21:18:16.575237   56731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/client.crt ...
	I0612 21:18:16.575271   56731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/client.crt: {Name:mk0cf486a68d49ba89d42a04b4cbc92056239527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:18:16.575462   56731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/client.key ...
	I0612 21:18:16.575481   56731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/client.key: {Name:mkdd4889a31da122ade3cd86ef47dcf9fdccb975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:18:16.575595   56731 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/apiserver.key.a9f580b9
	I0612 21:18:16.575614   56731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/apiserver.crt.a9f580b9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.31]
	I0612 21:18:17.029479   56731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/apiserver.crt.a9f580b9 ...
	I0612 21:18:17.029508   56731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/apiserver.crt.a9f580b9: {Name:mk89183e640fafb8ab09d1f880b480bfbe145195 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:18:17.029668   56731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/apiserver.key.a9f580b9 ...
	I0612 21:18:17.029685   56731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/apiserver.key.a9f580b9: {Name:mkdedf7baf6073d19ab0f9e639072edcfb55c64e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:18:17.029760   56731 certs.go:381] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/apiserver.crt.a9f580b9 -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/apiserver.crt
	I0612 21:18:17.029842   56731 certs.go:385] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/apiserver.key.a9f580b9 -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/apiserver.key
	I0612 21:18:17.029913   56731 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/proxy-client.key
	I0612 21:18:17.029930   56731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/proxy-client.crt with IP's: []
	I0612 21:18:17.119117   56731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/proxy-client.crt ...
	I0612 21:18:17.119148   56731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/proxy-client.crt: {Name:mk72f52f2def88c435056d2d465c8516b128248b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:18:17.119319   56731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/proxy-client.key ...
	I0612 21:18:17.119333   56731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/proxy-client.key: {Name:mkbc9da9c025425a13bbb15558bae8e95cb95e3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:18:17.119500   56731 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:18:17.119538   56731 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:18:17.119548   56731 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:18:17.119569   56731 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:18:17.119592   56731 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:18:17.119612   56731 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:18:17.119647   56731 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:18:17.120209   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:18:17.154878   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:18:17.189071   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:18:17.215706   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:18:17.243606   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0612 21:18:17.309307   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:18:17.334772   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:18:17.364493   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:18:17.445057   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:18:17.474511   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:18:17.511239   56731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:18:17.546197   56731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:18:17.565203   56731 ssh_runner.go:195] Run: openssl version
	I0612 21:18:17.571979   56731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:18:17.584443   56731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:18:17.590303   56731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:18:17.590368   56731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:18:17.597839   56731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:18:17.610974   56731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:18:17.623845   56731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:18:17.628847   56731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:18:17.628901   56731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:18:17.634973   56731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:18:17.647238   56731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:18:17.658610   56731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:18:17.663739   56731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:18:17.663800   56731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:18:17.669801   56731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:18:17.681544   56731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:18:17.686234   56731 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 21:18:17.686311   56731 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-724108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-724108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:18:17.686401   56731 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:18:17.686476   56731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:18:17.733161   56731 cri.go:89] found id: ""
	I0612 21:18:17.733252   56731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0612 21:18:17.745494   56731 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:18:17.757656   56731 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:18:17.769293   56731 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:18:17.769319   56731 kubeadm.go:156] found existing configuration files:
	
	I0612 21:18:17.769373   56731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:18:17.780205   56731 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:18:17.780280   56731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:18:17.791852   56731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:18:17.804150   56731 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:18:17.804223   56731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:18:17.815787   56731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:18:17.826219   56731 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:18:17.826293   56731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:18:17.837788   56731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:18:17.851364   56731 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:18:17.851425   56731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:18:17.866408   56731 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:18:18.039295   56731 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:18:18.039380   56731 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:18:18.221612   56731 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:18:18.221748   56731 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:18:18.221861   56731 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:18:18.455327   56731 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:18:18.457156   56731 out.go:204]   - Generating certificates and keys ...
	I0612 21:18:18.457268   56731 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:18:18.457348   56731 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:18:18.567329   56731 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0612 21:18:18.699292   56731 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0612 21:18:18.946956   56731 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0612 21:18:19.034535   56731 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0612 21:18:19.464621   56731 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0612 21:18:19.464787   56731 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-724108 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	I0612 21:18:19.745456   56731 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0612 21:18:19.745638   56731 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-724108 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	I0612 21:18:20.096182   56731 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0612 21:18:20.226167   56731 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0612 21:18:20.323579   56731 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0612 21:18:20.323939   56731 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:18:20.562940   56731 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:18:20.762534   56731 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:18:20.836996   56731 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:18:21.374190   56731 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:18:21.391562   56731 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:18:21.392821   56731 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:18:21.392891   56731 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:18:21.527103   56731 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:18:21.597583   56731 out.go:204]   - Booting up control plane ...
	I0612 21:18:21.597774   56731 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:18:21.597897   56731 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:18:21.598006   56731 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:18:21.598119   56731 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:18:21.598340   56731 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:19:01.542338   56731 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:19:01.542956   56731 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:19:01.543233   56731 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:19:06.543587   56731 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:19:06.543984   56731 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:19:16.542763   56731 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:19:16.543012   56731 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:19:36.542686   56731 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:19:36.542918   56731 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:20:16.545159   56731 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:20:16.545681   56731 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:20:16.545697   56731 kubeadm.go:309] 
	I0612 21:20:16.545798   56731 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:20:16.545884   56731 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:20:16.545895   56731 kubeadm.go:309] 
	I0612 21:20:16.545981   56731 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:20:16.546065   56731 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:20:16.546325   56731 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:20:16.546343   56731 kubeadm.go:309] 
	I0612 21:20:16.546579   56731 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:20:16.546656   56731 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:20:16.546728   56731 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:20:16.546735   56731 kubeadm.go:309] 
	I0612 21:20:16.547023   56731 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:20:16.547298   56731 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:20:16.547353   56731 kubeadm.go:309] 
	I0612 21:20:16.547652   56731 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:20:16.548013   56731 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:20:16.548567   56731 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:20:16.548667   56731 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:20:16.548694   56731 kubeadm.go:309] 
	I0612 21:20:16.548798   56731 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:20:16.548919   56731 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:20:16.549044   56731 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0612 21:20:16.549134   56731 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-724108 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-724108 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-724108 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-724108 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0612 21:20:16.549180   56731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:20:17.561041   56731 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.011806621s)
	I0612 21:20:17.561121   56731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:20:17.577525   56731 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:20:17.587974   56731 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:20:17.587994   56731 kubeadm.go:156] found existing configuration files:
	
	I0612 21:20:17.588038   56731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:20:17.597412   56731 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:20:17.597466   56731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:20:17.607318   56731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:20:17.617192   56731 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:20:17.617252   56731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:20:17.630416   56731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:20:17.643603   56731 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:20:17.643642   56731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:20:17.654223   56731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:20:17.664554   56731 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:20:17.664604   56731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:20:17.674168   56731 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:20:17.748029   56731 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:20:17.748081   56731 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:20:17.917877   56731 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:20:17.918070   56731 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:20:17.918241   56731 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:20:18.133868   56731 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:20:18.135993   56731 out.go:204]   - Generating certificates and keys ...
	I0612 21:20:18.136104   56731 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:20:18.136156   56731 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:20:18.136271   56731 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:20:18.136379   56731 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:20:18.136471   56731 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:20:18.136560   56731 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:20:18.137035   56731 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:20:18.137564   56731 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:20:18.137956   56731 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:20:18.138599   56731 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:20:18.138717   56731 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:20:18.138811   56731 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:20:18.203041   56731 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:20:18.602576   56731 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:20:18.700436   56731 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:20:19.002643   56731 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:20:19.023627   56731 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:20:19.025721   56731 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:20:19.025771   56731 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:20:19.197350   56731 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:20:19.198978   56731 out.go:204]   - Booting up control plane ...
	I0612 21:20:19.199120   56731 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:20:19.201994   56731 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:20:19.203488   56731 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:20:19.204688   56731 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:20:19.207800   56731 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:20:59.210973   56731 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:20:59.211218   56731 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:20:59.211543   56731 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:21:04.212305   56731 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:21:04.212582   56731 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:21:14.213694   56731 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:21:14.213994   56731 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:21:34.212701   56731 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:21:34.212975   56731 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:22:14.212864   56731 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:22:14.213151   56731 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:22:14.213166   56731 kubeadm.go:309] 
	I0612 21:22:14.213224   56731 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:22:14.213308   56731 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:22:14.213356   56731 kubeadm.go:309] 
	I0612 21:22:14.213561   56731 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:22:14.213643   56731 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:22:14.213792   56731 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:22:14.213810   56731 kubeadm.go:309] 
	I0612 21:22:14.213954   56731 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:22:14.214010   56731 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:22:14.214080   56731 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:22:14.214101   56731 kubeadm.go:309] 
	I0612 21:22:14.214246   56731 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:22:14.214388   56731 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:22:14.214401   56731 kubeadm.go:309] 
	I0612 21:22:14.214581   56731 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:22:14.214729   56731 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:22:14.214839   56731 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:22:14.214946   56731 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:22:14.214957   56731 kubeadm.go:309] 
	I0612 21:22:14.215314   56731 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:22:14.215410   56731 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:22:14.215493   56731 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:22:14.215569   56731 kubeadm.go:393] duration metric: took 3m56.529261423s to StartCluster
	I0612 21:22:14.215648   56731 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:22:14.215737   56731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:22:14.269977   56731 cri.go:89] found id: ""
	I0612 21:22:14.270010   56731 logs.go:276] 0 containers: []
	W0612 21:22:14.270021   56731 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:22:14.270028   56731 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:22:14.270096   56731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:22:14.312051   56731 cri.go:89] found id: ""
	I0612 21:22:14.312084   56731 logs.go:276] 0 containers: []
	W0612 21:22:14.312094   56731 logs.go:278] No container was found matching "etcd"
	I0612 21:22:14.312101   56731 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:22:14.312167   56731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:22:14.351937   56731 cri.go:89] found id: ""
	I0612 21:22:14.351963   56731 logs.go:276] 0 containers: []
	W0612 21:22:14.351974   56731 logs.go:278] No container was found matching "coredns"
	I0612 21:22:14.351982   56731 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:22:14.352053   56731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:22:14.389557   56731 cri.go:89] found id: ""
	I0612 21:22:14.389581   56731 logs.go:276] 0 containers: []
	W0612 21:22:14.389589   56731 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:22:14.389595   56731 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:22:14.389644   56731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:22:14.430461   56731 cri.go:89] found id: ""
	I0612 21:22:14.430486   56731 logs.go:276] 0 containers: []
	W0612 21:22:14.430493   56731 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:22:14.430498   56731 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:22:14.430559   56731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:22:14.469368   56731 cri.go:89] found id: ""
	I0612 21:22:14.469397   56731 logs.go:276] 0 containers: []
	W0612 21:22:14.469406   56731 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:22:14.469412   56731 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:22:14.469469   56731 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:22:14.516540   56731 cri.go:89] found id: ""
	I0612 21:22:14.516575   56731 logs.go:276] 0 containers: []
	W0612 21:22:14.516586   56731 logs.go:278] No container was found matching "kindnet"
	I0612 21:22:14.516598   56731 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:22:14.516615   56731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:22:14.664376   56731 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:22:14.664404   56731 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:22:14.664422   56731 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:22:14.812781   56731 logs.go:123] Gathering logs for container status ...
	I0612 21:22:14.812832   56731 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:22:14.864282   56731 logs.go:123] Gathering logs for kubelet ...
	I0612 21:22:14.864321   56731 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:22:14.926905   56731 logs.go:123] Gathering logs for dmesg ...
	I0612 21:22:14.926951   56731 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0612 21:22:14.940970   56731 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0612 21:22:14.941014   56731 out.go:239] * 
	W0612 21:22:14.941081   56731 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:22:14.941107   56731 out.go:239] * 
	W0612 21:22:14.941987   56731 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0612 21:22:14.945482   56731 out.go:177] 
	W0612 21:22:14.946806   56731 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:22:14.946868   56731 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0612 21:22:14.946900   56731 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0612 21:22:14.948527   56731 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-724108 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
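A minimal troubleshooting sketch (not part of the captured output), assuming the suggestion minikube itself prints above; the profile name and flags are taken from this run and are not re-verified here:

	# retry the v1.20.0 start with the cgroup-driver override suggested in the log
	minikube start -p kubernetes-upgrade-724108 --memory=2200 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
	# if the control plane still fails to come up, inspect the kubelet with the commands kubeadm recommends:
	#   systemctl status kubelet
	#   journalctl -xeu kubelet
	#   crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause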
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-724108
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-724108: (1.490029962s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-724108 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-724108 status --format={{.Host}}: exit status 7 (63.08255ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-724108 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-724108 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.799085449s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-724108 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-724108 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-724108 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (77.484404ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-724108] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-724108
	    minikube start -p kubernetes-upgrade-724108 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7241082 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-724108 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
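The downgrade rejection above is expected; a brief sketch of the recovery paths it suggests, reusing the profile name from this run (commands as quoted in the output, not re-verified here):

	# recreate the profile at the older version
	minikube delete -p kubernetes-upgrade-724108
	minikube start -p kubernetes-upgrade-724108 --kubernetes-version=v1.20.0
	# or keep the existing cluster at its current version
	minikube start -p kubernetes-upgrade-724108 --kubernetes-version=v1.30.1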
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-724108 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-724108 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.460025406s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-06-12 21:23:54.952973662 +0000 UTC m=+4386.567424032
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-724108 -n kubernetes-upgrade-724108
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-724108 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-724108 logs -n 25: (1.58758556s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-732641          | force-systemd-flag-732641 | jenkins | v1.33.1 | 12 Jun 24 21:20 UTC | 12 Jun 24 21:21 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-721096 sudo           | NoKubernetes-721096       | jenkins | v1.33.1 | 12 Jun 24 21:20 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-721096                | NoKubernetes-721096       | jenkins | v1.33.1 | 12 Jun 24 21:20 UTC | 12 Jun 24 21:20 UTC |
	| start   | -p NoKubernetes-721096                | NoKubernetes-721096       | jenkins | v1.33.1 | 12 Jun 24 21:20 UTC | 12 Jun 24 21:21 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-732641 ssh cat     | force-systemd-flag-732641 | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:21 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-732641          | force-systemd-flag-732641 | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:21 UTC |
	| start   | -p cert-expiration-112791             | cert-expiration-112791    | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:21 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-721096 sudo           | NoKubernetes-721096       | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-721096                | NoKubernetes-721096       | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:21 UTC |
	| start   | -p cert-options-449240                | cert-options-449240       | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:22 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-719458             | running-upgrade-719458    | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:21 UTC |
	| start   | -p pause-037058 --memory=2048         | pause-037058              | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:22 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-724108          | kubernetes-upgrade-724108 | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:22 UTC |
	| start   | -p kubernetes-upgrade-724108          | kubernetes-upgrade-724108 | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:23 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-449240 ssh               | cert-options-449240       | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:22 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-449240 -- sudo        | cert-options-449240       | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:22 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-449240                | cert-options-449240       | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:22 UTC |
	| start   | -p stopped-upgrade-776864             | minikube                  | jenkins | v1.26.0 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:23 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p pause-037058                       | pause-037058              | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:23 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-724108          | kubernetes-upgrade-724108 | jenkins | v1.33.1 | 12 Jun 24 21:23 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-724108          | kubernetes-upgrade-724108 | jenkins | v1.33.1 | 12 Jun 24 21:23 UTC | 12 Jun 24 21:23 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-776864 stop           | minikube                  | jenkins | v1.26.0 | 12 Jun 24 21:23 UTC | 12 Jun 24 21:23 UTC |
	| start   | -p stopped-upgrade-776864             | stopped-upgrade-776864    | jenkins | v1.33.1 | 12 Jun 24 21:23 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p pause-037058                       | pause-037058              | jenkins | v1.33.1 | 12 Jun 24 21:23 UTC | 12 Jun 24 21:23 UTC |
	| start   | -p auto-701638 --memory=3072          | auto-701638               | jenkins | v1.33.1 | 12 Jun 24 21:23 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 21:23:50
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 21:23:50.709677   65007 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:23:50.709824   65007 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:23:50.709834   65007 out.go:304] Setting ErrFile to fd 2...
	I0612 21:23:50.709858   65007 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:23:50.710497   65007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:23:50.711154   65007 out.go:298] Setting JSON to false
	I0612 21:23:50.712151   65007 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7576,"bootTime":1718219855,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:23:50.712213   65007 start.go:139] virtualization: kvm guest
	I0612 21:23:50.714473   65007 out.go:177] * [auto-701638] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:23:50.716056   65007 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:23:50.716127   65007 notify.go:220] Checking for updates...
	I0612 21:23:50.717326   65007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:23:50.718748   65007 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:23:50.720142   65007 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:23:50.721653   65007 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:23:50.723074   65007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:23:50.724917   65007 config.go:182] Loaded profile config "cert-expiration-112791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:23:50.725052   65007 config.go:182] Loaded profile config "kubernetes-upgrade-724108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:23:50.725200   65007 config.go:182] Loaded profile config "stopped-upgrade-776864": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0612 21:23:50.725319   65007 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:23:50.766944   65007 out.go:177] * Using the kvm2 driver based on user configuration
	I0612 21:23:50.768549   65007 start.go:297] selected driver: kvm2
	I0612 21:23:50.768578   65007 start.go:901] validating driver "kvm2" against <nil>
	I0612 21:23:50.768598   65007 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:23:50.769636   65007 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:23:50.769730   65007 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:23:50.790883   65007 install.go:137] /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:23:50.790933   65007 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0612 21:23:50.791130   65007 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:23:50.791199   65007 cni.go:84] Creating CNI manager for ""
	I0612 21:23:50.791212   65007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:23:50.791219   65007 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0612 21:23:50.791274   65007 start.go:340] cluster config:
	{Name:auto-701638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-701638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:23:50.791367   65007 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:23:50.793369   65007 out.go:177] * Starting "auto-701638" primary control-plane node in "auto-701638" cluster
	I0612 21:23:49.018263   64327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:23:49.518445   64327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:23:49.538899   64327 api_server.go:72] duration metric: took 1.021548829s to wait for apiserver process to appear ...
	I0612 21:23:49.538928   64327 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:23:49.538945   64327 api_server.go:253] Checking apiserver healthz at https://192.168.50.31:8443/healthz ...
	I0612 21:23:51.981856   64327 api_server.go:279] https://192.168.50.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:23:51.981880   64327 api_server.go:103] status: https://192.168.50.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:23:51.981893   64327 api_server.go:253] Checking apiserver healthz at https://192.168.50.31:8443/healthz ...
	I0612 21:23:52.011184   64327 api_server.go:279] https://192.168.50.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:23:52.011217   64327 api_server.go:103] status: https://192.168.50.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:23:52.039373   64327 api_server.go:253] Checking apiserver healthz at https://192.168.50.31:8443/healthz ...
	I0612 21:23:52.050946   64327 api_server.go:279] https://192.168.50.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:23:52.050974   64327 api_server.go:103] status: https://192.168.50.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:23:52.539262   64327 api_server.go:253] Checking apiserver healthz at https://192.168.50.31:8443/healthz ...
	I0612 21:23:52.543406   64327 api_server.go:279] https://192.168.50.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:23:52.543476   64327 api_server.go:103] status: https://192.168.50.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:23:53.038987   64327 api_server.go:253] Checking apiserver healthz at https://192.168.50.31:8443/healthz ...
	I0612 21:23:53.048939   64327 api_server.go:279] https://192.168.50.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:23:53.049000   64327 api_server.go:103] status: https://192.168.50.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:23:53.539872   64327 api_server.go:253] Checking apiserver healthz at https://192.168.50.31:8443/healthz ...
	I0612 21:23:53.543988   64327 api_server.go:279] https://192.168.50.31:8443/healthz returned 200:
	ok
	I0612 21:23:53.549816   64327 api_server.go:141] control plane version: v1.30.1
	I0612 21:23:53.549837   64327 api_server.go:131] duration metric: took 4.010903438s to wait for apiserver health ...
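For context, the 403 / 500 / 200 progression recorded above is typical of an apiserver restart: anonymous probes are rejected with 403 until the RBAC bootstrap roles grant unauthenticated access to /healthz, then /healthz answers 500 while individual post-start hooks are still pending, and finally plain 200 "ok". The following is a minimal, illustrative polling loop in the spirit of what api_server.go is doing here; the client setup, timeouts and URL are assumptions for the sketch, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns
// HTTP 200 or the overall timeout expires. TLS verification is skipped
// because the probe only cares about liveness, not server identity.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
			// 403: RBAC bootstrap roles not in place yet.
			// 500: one or more post-start hooks still failing.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.31:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}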
	I0612 21:23:53.549845   64327 cni.go:84] Creating CNI manager for ""
	I0612 21:23:53.549851   64327 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:23:53.551281   64327 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:23:53.552625   64327 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:23:53.564153   64327 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
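The 496-byte file copied here is a CNI config list for the standard bridge plugin. The log does not show its contents; the sketch below writes a conflist of the usual bridge + portmap shape. The subnet, CNI version and other field values are illustrative assumptions, not the exact file minikube generates.

package main

import (
	"log"
	"os"
)

// A minimal bridge CNI config list of the kind dropped into /etc/cni/net.d.
// Values are illustrative; the real 1-k8s.conflist uses the node's pod CIDR
// and minikube's own bridge name.
const bridgeConflist = `{
  "cniVersion": "0.4.0",
  "name": "k8s",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// 0644 keeps the config readable by the container runtime.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}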
	I0612 21:23:53.582413   64327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:23:53.592241   64327 system_pods.go:59] 8 kube-system pods found
	I0612 21:23:53.592266   64327 system_pods.go:61] "coredns-7db6d8ff4d-54l7k" [dc593b3b-e9cd-4cff-b9b2-8c7c7cf0db52] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:23:53.592274   64327 system_pods.go:61] "coredns-7db6d8ff4d-vhfcz" [7b632542-e6e9-4ae6-828c-9299276c6ae7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:23:53.592284   64327 system_pods.go:61] "etcd-kubernetes-upgrade-724108" [59c3fd1c-8558-4b00-8880-9019df5086ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:23:53.592291   64327 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-724108" [8a98115b-93e3-487d-9002-5cb8e594cafc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:23:53.592302   64327 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-724108" [175c99cf-2d82-4928-ba89-8211b21a549b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:23:53.592307   64327 system_pods.go:61] "kube-proxy-ssjq6" [c2c1e4f6-5d0c-44fc-8c66-371b6b75f3ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0612 21:23:53.592315   64327 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-724108" [e49074c6-62ac-49d0-98a5-c5e361fbfcf1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:23:53.592320   64327 system_pods.go:61] "storage-provisioner" [1e6e8046-96c5-4ea9-9022-5b09a2617cec] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:23:53.592328   64327 system_pods.go:74] duration metric: took 9.900524ms to wait for pod list to return data ...
	I0612 21:23:53.592333   64327 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:23:53.596119   64327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:23:53.596139   64327 node_conditions.go:123] node cpu capacity is 2
	I0612 21:23:53.596148   64327 node_conditions.go:105] duration metric: took 3.810874ms to run NodePressure ...
	I0612 21:23:53.596165   64327 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:23:53.907826   64327 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:23:53.921209   64327 ops.go:34] apiserver oom_adj: -16
	I0612 21:23:53.921244   64327 kubeadm.go:591] duration metric: took 22.132886404s to restartPrimaryControlPlane
	I0612 21:23:53.921257   64327 kubeadm.go:393] duration metric: took 22.264553442s to StartCluster
	I0612 21:23:53.921279   64327 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:23:53.921383   64327 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:23:53.922558   64327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:23:53.922818   64327 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:23:53.925510   64327 out.go:177] * Verifying Kubernetes components...
	I0612 21:23:53.922888   64327 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:23:53.923021   64327 config.go:182] Loaded profile config "kubernetes-upgrade-724108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:23:53.926864   64327 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-724108"
	I0612 21:23:53.926902   64327 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-724108"
	W0612 21:23:53.926911   64327 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:23:53.926937   64327 host.go:66] Checking if "kubernetes-upgrade-724108" exists ...
	I0612 21:23:53.926868   64327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:23:53.926873   64327 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-724108"
	I0612 21:23:53.927062   64327 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-724108"
	I0612 21:23:53.927322   64327 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:23:53.927367   64327 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:23:53.927372   64327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:23:53.927400   64327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:23:53.944327   64327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38005
	I0612 21:23:53.944872   64327 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:23:53.945383   64327 main.go:141] libmachine: Using API Version  1
	I0612 21:23:53.945403   64327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:23:53.945685   64327 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:23:53.946146   64327 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:23:53.946185   64327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:23:53.948478   64327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42215
	I0612 21:23:53.948980   64327 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:23:53.949558   64327 main.go:141] libmachine: Using API Version  1
	I0612 21:23:53.949587   64327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:23:53.949906   64327 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:23:53.950080   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetState
	I0612 21:23:53.952785   64327 kapi.go:59] client config for kubernetes-upgrade-724108: &rest.Config{Host:"https://192.168.50.31:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/client.crt", KeyFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kubernetes-upgrade-724108/client.key", CAFile:"/home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfb000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 21:23:53.953104   64327 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-724108"
	W0612 21:23:53.953124   64327 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:23:53.953152   64327 host.go:66] Checking if "kubernetes-upgrade-724108" exists ...
	I0612 21:23:53.953526   64327 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:23:53.953572   64327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:23:53.963057   64327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42893
	I0612 21:23:53.963498   64327 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:23:53.964012   64327 main.go:141] libmachine: Using API Version  1
	I0612 21:23:53.964039   64327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:23:53.964386   64327 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:23:53.964648   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetState
	I0612 21:23:53.966368   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .DriverName
	I0612 21:23:53.968467   64327 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:23:51.920578   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | domain stopped-upgrade-776864 has defined MAC address 52:54:00:e1:b9:5c in network mk-stopped-upgrade-776864
	I0612 21:23:51.921201   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | unable to find current IP address of domain stopped-upgrade-776864 in network mk-stopped-upgrade-776864
	I0612 21:23:51.921226   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | I0612 21:23:51.921148   64659 retry.go:31] will retry after 2.958888125s: waiting for machine to come up
	I0612 21:23:53.969889   64327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I0612 21:23:53.969922   64327 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:23:53.969935   64327 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:23:53.969949   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHHostname
	I0612 21:23:53.970431   64327 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:23:53.971084   64327 main.go:141] libmachine: Using API Version  1
	I0612 21:23:53.971107   64327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:23:53.971602   64327 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:23:53.972214   64327 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:23:53.972260   64327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:23:53.972974   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:23:53.973385   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:23:53.973413   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:23:53.973622   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHPort
	I0612 21:23:53.973754   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:23:53.973975   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHUsername
	I0612 21:23:53.974092   64327 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108/id_rsa Username:docker}
	I0612 21:23:53.988147   64327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0612 21:23:53.988577   64327 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:23:53.989026   64327 main.go:141] libmachine: Using API Version  1
	I0612 21:23:53.989042   64327 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:23:53.989336   64327 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:23:53.989581   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetState
	I0612 21:23:53.991198   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .DriverName
	I0612 21:23:53.991427   64327 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:23:53.991440   64327 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:23:53.991460   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHHostname
	I0612 21:23:53.994675   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:23:53.995025   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:aa:0a", ip: ""} in network mk-kubernetes-upgrade-724108: {Iface:virbr2 ExpiryTime:2024-06-12 22:18:00 +0000 UTC Type:0 Mac:52:54:00:f7:aa:0a Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-724108 Clientid:01:52:54:00:f7:aa:0a}
	I0612 21:23:53.995054   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | domain kubernetes-upgrade-724108 has defined IP address 192.168.50.31 and MAC address 52:54:00:f7:aa:0a in network mk-kubernetes-upgrade-724108
	I0612 21:23:53.995155   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHPort
	I0612 21:23:53.995330   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHKeyPath
	I0612 21:23:53.995475   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .GetSSHUsername
	I0612 21:23:53.995739   64327 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108/id_rsa Username:docker}
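The "new ssh client" lines above show the machine's key pair being reused to run commands on the guest. A rough sketch of such a key-based SSH runner using golang.org/x/crypto/ssh follows; the address, user, key path and command are taken from the log, but the helper itself is an assumption for illustration, not minikube's sshutil code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH connects to addr with public-key auth and runs a single command,
// roughly what each "Run:" line in the log corresponds to.
func runOverSSH(addr, user, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, host key not pinned
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.50.31:22", "docker",
		"/home/jenkins/minikube-integration/17779-14199/.minikube/machines/kubernetes-upgrade-724108/id_rsa",
		"sudo systemctl start kubelet")
	fmt.Println(out, err)
}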
	I0612 21:23:54.153559   64327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:23:54.168765   64327 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:23:54.168848   64327 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:23:54.186482   64327 api_server.go:72] duration metric: took 263.624093ms to wait for apiserver process to appear ...
	I0612 21:23:54.186507   64327 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:23:54.186528   64327 api_server.go:253] Checking apiserver healthz at https://192.168.50.31:8443/healthz ...
	I0612 21:23:54.192685   64327 api_server.go:279] https://192.168.50.31:8443/healthz returned 200:
	ok
	I0612 21:23:54.193605   64327 api_server.go:141] control plane version: v1.30.1
	I0612 21:23:54.193629   64327 api_server.go:131] duration metric: took 7.113736ms to wait for apiserver health ...
	I0612 21:23:54.193640   64327 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:23:54.200303   64327 system_pods.go:59] 8 kube-system pods found
	I0612 21:23:54.200333   64327 system_pods.go:61] "coredns-7db6d8ff4d-54l7k" [dc593b3b-e9cd-4cff-b9b2-8c7c7cf0db52] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:23:54.200343   64327 system_pods.go:61] "coredns-7db6d8ff4d-vhfcz" [7b632542-e6e9-4ae6-828c-9299276c6ae7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:23:54.200355   64327 system_pods.go:61] "etcd-kubernetes-upgrade-724108" [59c3fd1c-8558-4b00-8880-9019df5086ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:23:54.200362   64327 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-724108" [8a98115b-93e3-487d-9002-5cb8e594cafc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:23:54.200373   64327 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-724108" [175c99cf-2d82-4928-ba89-8211b21a549b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:23:54.200390   64327 system_pods.go:61] "kube-proxy-ssjq6" [c2c1e4f6-5d0c-44fc-8c66-371b6b75f3ee] Running
	I0612 21:23:54.200399   64327 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-724108" [e49074c6-62ac-49d0-98a5-c5e361fbfcf1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:23:54.200404   64327 system_pods.go:61] "storage-provisioner" [1e6e8046-96c5-4ea9-9022-5b09a2617cec] Running
	I0612 21:23:54.200411   64327 system_pods.go:74] duration metric: took 6.763082ms to wait for pod list to return data ...
	I0612 21:23:54.200423   64327 kubeadm.go:576] duration metric: took 277.570978ms to wait for: map[apiserver:true system_pods:true]
	I0612 21:23:54.200436   64327 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:23:54.202840   64327 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:23:54.202865   64327 node_conditions.go:123] node cpu capacity is 2
	I0612 21:23:54.202876   64327 node_conditions.go:105] duration metric: took 2.434012ms to run NodePressure ...
	I0612 21:23:54.202889   64327 start.go:240] waiting for startup goroutines ...
	I0612 21:23:54.242144   64327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:23:54.265769   64327 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
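Both addon manifests are applied with the guest's own kubectl binary and kubeconfig rather than the host's. A hedged sketch of the equivalent invocation from Go is shown below; it mirrors the two "Run:" lines above and is meant to execute inside the minikube guest (or over SSH), and the helper is purely illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon reproduces the invocation shown in the log: the node-local
// kubectl with the node's kubeconfig, applied to an addon manifest.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.1/kubectl", "apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println(err)
		}
	}
}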
	I0612 21:23:54.881358   64327 main.go:141] libmachine: Making call to close driver server
	I0612 21:23:54.881385   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .Close
	I0612 21:23:54.881666   64327 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:23:54.881681   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Closing plugin on server side
	I0612 21:23:54.881688   64327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:23:54.881700   64327 main.go:141] libmachine: Making call to close driver server
	I0612 21:23:54.881709   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .Close
	I0612 21:23:54.881689   64327 main.go:141] libmachine: Making call to close driver server
	I0612 21:23:54.881730   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .Close
	I0612 21:23:54.881992   64327 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:23:54.882010   64327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:23:54.882021   64327 main.go:141] libmachine: Making call to close driver server
	I0612 21:23:54.882029   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .Close
	I0612 21:23:54.882118   64327 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:23:54.882130   64327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:23:54.882227   64327 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:23:54.882249   64327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:23:54.887953   64327 main.go:141] libmachine: Making call to close driver server
	I0612 21:23:54.887969   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) Calling .Close
	I0612 21:23:54.888205   64327 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:23:54.888231   64327 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:23:54.888248   64327 main.go:141] libmachine: (kubernetes-upgrade-724108) DBG | Closing plugin on server side
	I0612 21:23:54.889895   64327 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0612 21:23:54.891038   64327 addons.go:510] duration metric: took 968.16034ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0612 21:23:54.891068   64327 start.go:245] waiting for cluster config update ...
	I0612 21:23:54.891080   64327 start.go:254] writing updated cluster config ...
	I0612 21:23:54.891344   64327 ssh_runner.go:195] Run: rm -f paused
	I0612 21:23:54.937617   64327 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:23:54.939300   64327 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-724108" cluster and "default" namespace by default
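The closing "minor skew" line compares the host kubectl (1.30.2) against the cluster version (1.30.1) and only warns when the minor versions drift apart. An illustrative version of that check follows; minikube itself would rely on a proper semver parser, so the simplified string split here is an assumption.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" version strings, e.g. ("1.30.2", "1.30.1") -> 0.
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.30.2", "1.30.1")
	fmt.Printf("kubectl: 1.30.2, cluster: 1.30.1 (minor skew: %d)\n", skew)
}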
	
	
	==> CRI-O <==
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.611426958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718227435611397217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c815988c-77fa-4877-97d1-866fd696cb12 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.612322895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17b4b51a-695e-40bc-850c-110aeadbe895 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.612513871Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17b4b51a-695e-40bc-850c-110aeadbe895 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.612990050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5dbc7b120727a15d828eba8ba95c3daf748ed7121ae63e3926195fc7dcbe56f,PodSandboxId:a0987616bed850ad1c08b6149a9237f4c5811f99f5c37027310673a6dc64b04f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718227432797931335,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ssjq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2c1e4f6-5d0c-44fc-8c66-371b6b75f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 84b38528,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3b52c9a1faa48345dafcb5ee2ac4041585bbda54b755b689d6ac0aee2f0ba0,PodSandboxId:059a5eb9545c41d4e468c1f5f9c117f66fa00756992fb142261735a8985ea0ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227432832639926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-54l7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc593b3b-e9cd-4cff-b9b2-8c7c7cf0db52,},Annotations:map[string]string{io.kubernetes.container.hash: 480877de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed32adda6c6b7be0d8f4a42bc4f8dbae192074349a50c1c1a828c6f63fe1e0ff,PodSandboxId:198e2ad1cffb74b0533f1acf46aa96dc4f706a886c559e9a5ea59fb88f10a125,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718227432826125333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1e6e8046-96c5-4ea9-9022-5b09a2617cec,},Annotations:map[string]string{io.kubernetes.container.hash: 33f7d230,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b79b8ccffc07fe65b331afa1c40c3b9f7411f336e0d6ba460f6cc73351e6cb1,PodSandboxId:dbc801792515792466eb6fce964ba7c3ef913ff0f68036d489ff7883e378a544,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227432812127689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vhfcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b632542-e6e9-4ae6-828c-92
99276c6ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 4104c976,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdf23fc510f606fa07784b8e92bc3d05acac6e068d15f90c21c5b7c0011be59,PodSandboxId:2692ccccf391b4c8ffe354408715940519561ecffebab1ed8a4d690c0faa7ac8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718227429065976710,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7372f3393746ef7b5340de8fef06750a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90a57f3dc8214c241b2f517676c13c62d844ecebe3e4ebf412e8b87994eaff1,PodSandboxId:b464d0ea1c55bf3c124959cf37c5256d9c8339cd5ace82d8ee1575109446ef7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718227429042837
907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4063507a1f83b98643c19acf6ec69cf,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de477c159e43d30200bc2a41d4d7e60d47c6a8481e9006b175aaeb04560adfa6,PodSandboxId:fd3bc63dad92024279fb63a4a0744ec278cc85d6e28b4be129b8c749d33cfcea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:171
8227429037124729,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39afb5594f40a9577379988d9b297a5,},Annotations:map[string]string{io.kubernetes.container.hash: eebb16ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:637ee5f338c22718d248895d3838f3311a64fbce786391cc05fad28b1673ccba,PodSandboxId:43f79cf52a38cc20b39993ff9a580fb7bfa9c29ed8908911a2380ef423400316,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:171822742674829835
4,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92836c158f3d15e314e49137598bc383,},Annotations:map[string]string{io.kubernetes.container.hash: e6ccd716,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c1806d07405fc3f7c690bb6e8474ea72796eb8626d66955b2199ed27d2d3d2,PodSandboxId:198e2ad1cffb74b0533f1acf46aa96dc4f706a886c559e9a5ea59fb88f10a125,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718227424750125368,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6e8046-96c5-4ea9-9022-5b09a2617cec,},Annotations:map[string]string{io.kubernetes.container.hash: 33f7d230,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415c589670fb68dcea8f071b2cc4f3f8b23cb69d6450301f3d2bbd3dc302875f,PodSandboxId:dbc801792515792466eb6fce964ba7c3ef913ff0f68036d489ff7883e378a544,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227411400304608,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vhfcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b632542-e6e9-4ae6-828c-9299276c6ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 4104c976,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:497bf75c64a8bca3301f17325a843584428a5930c49f49564824a4d54c2f297a,PodSandboxId:059a5eb9545c41d4e468c1f5f9c117f66fa00756992fb142261735a8985ea0ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227411370403478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-54l7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc593b3b-e9cd-4cff-b9b2-8c7c7cf0db52,},Annotations:map[string]string{io.kubernetes.container.hash: 480877de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e526608624f480c35c1cb49aa760bf06c44a01b1e1c60f3f19f8f31061ba885,PodSandboxId:f1dfe30a51e57eb8dcde0f803450f08c786aac4bccd5e992a1f6be48d0f
fe493,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718227408029991974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ssjq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2c1e4f6-5d0c-44fc-8c66-371b6b75f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 84b38528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38168a764cb804d1b8bc6d54a1d6e00cbeefbff0d81d5fc5c0b2d2a4dd2f974f,PodSandboxId:257897b8bc17effb16e1a656e8caea6d0861b5df1ccae6b34fb13e720b4eb08c,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718227407873823102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92836c158f3d15e314e49137598bc383,},Annotations:map[string]string{io.kubernetes.container.hash: e6ccd716,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75332419696c294fd56c370fe43f18d272a90e0690a71e0648c9428d3efe56f8,PodSandboxId:9c6fe414048aab5f8f42465852274cb11fed9b96da47c82c18196ad4085efd6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Imag
e:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718227407774903289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39afb5594f40a9577379988d9b297a5,},Annotations:map[string]string{io.kubernetes.container.hash: eebb16ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6131829b63284781bcf2e27b4470daf45c0f0d4470d0c451df4a04b0513f786f,PodSandboxId:51f5389cc17692e27247a8467ebf9777d423efbfde81844dd7bec262ca30ed07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},I
mage:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718227407673911620,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4063507a1f83b98643c19acf6ec69cf,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cd5a7f3adbe9f9ce0558b2ef5a03163eb5d69628e1e4de04fc031c5bb8b0cc7,PodSandboxId:6e3beeb69fcc3fc835ab310e8582b6f511615f84574bf2b4976d23b79661b1e0,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718227407640466934,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7372f3393746ef7b5340de8fef06750a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17b4b51a-695e-40bc-850c-110aeadbe895 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.662558349Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=35395b3f-88af-4719-8e3d-9c7b55d6b35e name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.662651981Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35395b3f-88af-4719-8e3d-9c7b55d6b35e name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.664082546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ce7e06e-3455-43c0-be07-6ba67a0e9cf2 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.664582407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718227435664559277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ce7e06e-3455-43c0-be07-6ba67a0e9cf2 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.665467283Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=871670af-ec64-414d-824b-a23f594ceb7c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.665540690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=871670af-ec64-414d-824b-a23f594ceb7c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.665849899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5dbc7b120727a15d828eba8ba95c3daf748ed7121ae63e3926195fc7dcbe56f,PodSandboxId:a0987616bed850ad1c08b6149a9237f4c5811f99f5c37027310673a6dc64b04f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718227432797931335,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ssjq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2c1e4f6-5d0c-44fc-8c66-371b6b75f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 84b38528,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3b52c9a1faa48345dafcb5ee2ac4041585bbda54b755b689d6ac0aee2f0ba0,PodSandboxId:059a5eb9545c41d4e468c1f5f9c117f66fa00756992fb142261735a8985ea0ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227432832639926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-54l7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc593b3b-e9cd-4cff-b9b2-8c7c7cf0db52,},Annotations:map[string]string{io.kubernetes.container.hash: 480877de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed32adda6c6b7be0d8f4a42bc4f8dbae192074349a50c1c1a828c6f63fe1e0ff,PodSandboxId:198e2ad1cffb74b0533f1acf46aa96dc4f706a886c559e9a5ea59fb88f10a125,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718227432826125333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1e6e8046-96c5-4ea9-9022-5b09a2617cec,},Annotations:map[string]string{io.kubernetes.container.hash: 33f7d230,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b79b8ccffc07fe65b331afa1c40c3b9f7411f336e0d6ba460f6cc73351e6cb1,PodSandboxId:dbc801792515792466eb6fce964ba7c3ef913ff0f68036d489ff7883e378a544,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227432812127689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vhfcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b632542-e6e9-4ae6-828c-92
99276c6ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 4104c976,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdf23fc510f606fa07784b8e92bc3d05acac6e068d15f90c21c5b7c0011be59,PodSandboxId:2692ccccf391b4c8ffe354408715940519561ecffebab1ed8a4d690c0faa7ac8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718227429065976710,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7372f3393746ef7b5340de8fef06750a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90a57f3dc8214c241b2f517676c13c62d844ecebe3e4ebf412e8b87994eaff1,PodSandboxId:b464d0ea1c55bf3c124959cf37c5256d9c8339cd5ace82d8ee1575109446ef7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718227429042837
907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4063507a1f83b98643c19acf6ec69cf,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de477c159e43d30200bc2a41d4d7e60d47c6a8481e9006b175aaeb04560adfa6,PodSandboxId:fd3bc63dad92024279fb63a4a0744ec278cc85d6e28b4be129b8c749d33cfcea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:171
8227429037124729,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39afb5594f40a9577379988d9b297a5,},Annotations:map[string]string{io.kubernetes.container.hash: eebb16ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:637ee5f338c22718d248895d3838f3311a64fbce786391cc05fad28b1673ccba,PodSandboxId:43f79cf52a38cc20b39993ff9a580fb7bfa9c29ed8908911a2380ef423400316,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:171822742674829835
4,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92836c158f3d15e314e49137598bc383,},Annotations:map[string]string{io.kubernetes.container.hash: e6ccd716,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c1806d07405fc3f7c690bb6e8474ea72796eb8626d66955b2199ed27d2d3d2,PodSandboxId:198e2ad1cffb74b0533f1acf46aa96dc4f706a886c559e9a5ea59fb88f10a125,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718227424750125368,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6e8046-96c5-4ea9-9022-5b09a2617cec,},Annotations:map[string]string{io.kubernetes.container.hash: 33f7d230,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415c589670fb68dcea8f071b2cc4f3f8b23cb69d6450301f3d2bbd3dc302875f,PodSandboxId:dbc801792515792466eb6fce964ba7c3ef913ff0f68036d489ff7883e378a544,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227411400304608,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vhfcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b632542-e6e9-4ae6-828c-9299276c6ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 4104c976,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:497bf75c64a8bca3301f17325a843584428a5930c49f49564824a4d54c2f297a,PodSandboxId:059a5eb9545c41d4e468c1f5f9c117f66fa00756992fb142261735a8985ea0ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227411370403478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-54l7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc593b3b-e9cd-4cff-b9b2-8c7c7cf0db52,},Annotations:map[string]string{io.kubernetes.container.hash: 480877de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e526608624f480c35c1cb49aa760bf06c44a01b1e1c60f3f19f8f31061ba885,PodSandboxId:f1dfe30a51e57eb8dcde0f803450f08c786aac4bccd5e992a1f6be48d0f
fe493,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718227408029991974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ssjq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2c1e4f6-5d0c-44fc-8c66-371b6b75f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 84b38528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38168a764cb804d1b8bc6d54a1d6e00cbeefbff0d81d5fc5c0b2d2a4dd2f974f,PodSandboxId:257897b8bc17effb16e1a656e8caea6d0861b5df1ccae6b34fb13e720b4eb08c,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718227407873823102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92836c158f3d15e314e49137598bc383,},Annotations:map[string]string{io.kubernetes.container.hash: e6ccd716,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75332419696c294fd56c370fe43f18d272a90e0690a71e0648c9428d3efe56f8,PodSandboxId:9c6fe414048aab5f8f42465852274cb11fed9b96da47c82c18196ad4085efd6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Imag
e:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718227407774903289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39afb5594f40a9577379988d9b297a5,},Annotations:map[string]string{io.kubernetes.container.hash: eebb16ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6131829b63284781bcf2e27b4470daf45c0f0d4470d0c451df4a04b0513f786f,PodSandboxId:51f5389cc17692e27247a8467ebf9777d423efbfde81844dd7bec262ca30ed07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},I
mage:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718227407673911620,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4063507a1f83b98643c19acf6ec69cf,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cd5a7f3adbe9f9ce0558b2ef5a03163eb5d69628e1e4de04fc031c5bb8b0cc7,PodSandboxId:6e3beeb69fcc3fc835ab310e8582b6f511615f84574bf2b4976d23b79661b1e0,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718227407640466934,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7372f3393746ef7b5340de8fef06750a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=871670af-ec64-414d-824b-a23f594ceb7c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.710842369Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0640321a-f9ba-41d3-86c1-a2b0dec1ccae name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.711259570Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0640321a-f9ba-41d3-86c1-a2b0dec1ccae name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.712540574Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cb34c73f-cefe-4970-a9ed-afa72a3bdaf3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.712884164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718227435712864776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb34c73f-cefe-4970-a9ed-afa72a3bdaf3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.713677887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66a4e104-80c0-4898-9e99-0f8879513703 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.713732482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66a4e104-80c0-4898-9e99-0f8879513703 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.714062326Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5dbc7b120727a15d828eba8ba95c3daf748ed7121ae63e3926195fc7dcbe56f,PodSandboxId:a0987616bed850ad1c08b6149a9237f4c5811f99f5c37027310673a6dc64b04f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718227432797931335,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ssjq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2c1e4f6-5d0c-44fc-8c66-371b6b75f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 84b38528,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3b52c9a1faa48345dafcb5ee2ac4041585bbda54b755b689d6ac0aee2f0ba0,PodSandboxId:059a5eb9545c41d4e468c1f5f9c117f66fa00756992fb142261735a8985ea0ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227432832639926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-54l7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc593b3b-e9cd-4cff-b9b2-8c7c7cf0db52,},Annotations:map[string]string{io.kubernetes.container.hash: 480877de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed32adda6c6b7be0d8f4a42bc4f8dbae192074349a50c1c1a828c6f63fe1e0ff,PodSandboxId:198e2ad1cffb74b0533f1acf46aa96dc4f706a886c559e9a5ea59fb88f10a125,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718227432826125333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1e6e8046-96c5-4ea9-9022-5b09a2617cec,},Annotations:map[string]string{io.kubernetes.container.hash: 33f7d230,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b79b8ccffc07fe65b331afa1c40c3b9f7411f336e0d6ba460f6cc73351e6cb1,PodSandboxId:dbc801792515792466eb6fce964ba7c3ef913ff0f68036d489ff7883e378a544,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227432812127689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vhfcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b632542-e6e9-4ae6-828c-92
99276c6ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 4104c976,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdf23fc510f606fa07784b8e92bc3d05acac6e068d15f90c21c5b7c0011be59,PodSandboxId:2692ccccf391b4c8ffe354408715940519561ecffebab1ed8a4d690c0faa7ac8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718227429065976710,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7372f3393746ef7b5340de8fef06750a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90a57f3dc8214c241b2f517676c13c62d844ecebe3e4ebf412e8b87994eaff1,PodSandboxId:b464d0ea1c55bf3c124959cf37c5256d9c8339cd5ace82d8ee1575109446ef7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718227429042837
907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4063507a1f83b98643c19acf6ec69cf,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de477c159e43d30200bc2a41d4d7e60d47c6a8481e9006b175aaeb04560adfa6,PodSandboxId:fd3bc63dad92024279fb63a4a0744ec278cc85d6e28b4be129b8c749d33cfcea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:171
8227429037124729,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39afb5594f40a9577379988d9b297a5,},Annotations:map[string]string{io.kubernetes.container.hash: eebb16ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:637ee5f338c22718d248895d3838f3311a64fbce786391cc05fad28b1673ccba,PodSandboxId:43f79cf52a38cc20b39993ff9a580fb7bfa9c29ed8908911a2380ef423400316,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:171822742674829835
4,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92836c158f3d15e314e49137598bc383,},Annotations:map[string]string{io.kubernetes.container.hash: e6ccd716,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c1806d07405fc3f7c690bb6e8474ea72796eb8626d66955b2199ed27d2d3d2,PodSandboxId:198e2ad1cffb74b0533f1acf46aa96dc4f706a886c559e9a5ea59fb88f10a125,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718227424750125368,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6e8046-96c5-4ea9-9022-5b09a2617cec,},Annotations:map[string]string{io.kubernetes.container.hash: 33f7d230,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415c589670fb68dcea8f071b2cc4f3f8b23cb69d6450301f3d2bbd3dc302875f,PodSandboxId:dbc801792515792466eb6fce964ba7c3ef913ff0f68036d489ff7883e378a544,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227411400304608,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vhfcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b632542-e6e9-4ae6-828c-9299276c6ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 4104c976,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:497bf75c64a8bca3301f17325a843584428a5930c49f49564824a4d54c2f297a,PodSandboxId:059a5eb9545c41d4e468c1f5f9c117f66fa00756992fb142261735a8985ea0ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227411370403478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-54l7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc593b3b-e9cd-4cff-b9b2-8c7c7cf0db52,},Annotations:map[string]string{io.kubernetes.container.hash: 480877de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e526608624f480c35c1cb49aa760bf06c44a01b1e1c60f3f19f8f31061ba885,PodSandboxId:f1dfe30a51e57eb8dcde0f803450f08c786aac4bccd5e992a1f6be48d0f
fe493,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718227408029991974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ssjq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2c1e4f6-5d0c-44fc-8c66-371b6b75f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 84b38528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38168a764cb804d1b8bc6d54a1d6e00cbeefbff0d81d5fc5c0b2d2a4dd2f974f,PodSandboxId:257897b8bc17effb16e1a656e8caea6d0861b5df1ccae6b34fb13e720b4eb08c,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718227407873823102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92836c158f3d15e314e49137598bc383,},Annotations:map[string]string{io.kubernetes.container.hash: e6ccd716,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75332419696c294fd56c370fe43f18d272a90e0690a71e0648c9428d3efe56f8,PodSandboxId:9c6fe414048aab5f8f42465852274cb11fed9b96da47c82c18196ad4085efd6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Imag
e:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718227407774903289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39afb5594f40a9577379988d9b297a5,},Annotations:map[string]string{io.kubernetes.container.hash: eebb16ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6131829b63284781bcf2e27b4470daf45c0f0d4470d0c451df4a04b0513f786f,PodSandboxId:51f5389cc17692e27247a8467ebf9777d423efbfde81844dd7bec262ca30ed07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},I
mage:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718227407673911620,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4063507a1f83b98643c19acf6ec69cf,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cd5a7f3adbe9f9ce0558b2ef5a03163eb5d69628e1e4de04fc031c5bb8b0cc7,PodSandboxId:6e3beeb69fcc3fc835ab310e8582b6f511615f84574bf2b4976d23b79661b1e0,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718227407640466934,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7372f3393746ef7b5340de8fef06750a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66a4e104-80c0-4898-9e99-0f8879513703 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.747541857Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8e983af-125a-4d36-ae6d-ec13501709bb name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.747633965Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8e983af-125a-4d36-ae6d-ec13501709bb name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.749405430Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c55bc234-8c80-49ce-9969-6f183f8151b1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.749768768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718227435749745693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c55bc234-8c80-49ce-9969-6f183f8151b1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.750252026Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d408131e-6f5d-4c03-8758-b909622ef010 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.750323763Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d408131e-6f5d-4c03-8758-b909622ef010 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:55 kubernetes-upgrade-724108 crio[2979]: time="2024-06-12 21:23:55.750676972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5dbc7b120727a15d828eba8ba95c3daf748ed7121ae63e3926195fc7dcbe56f,PodSandboxId:a0987616bed850ad1c08b6149a9237f4c5811f99f5c37027310673a6dc64b04f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718227432797931335,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ssjq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2c1e4f6-5d0c-44fc-8c66-371b6b75f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 84b38528,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3b52c9a1faa48345dafcb5ee2ac4041585bbda54b755b689d6ac0aee2f0ba0,PodSandboxId:059a5eb9545c41d4e468c1f5f9c117f66fa00756992fb142261735a8985ea0ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227432832639926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-54l7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc593b3b-e9cd-4cff-b9b2-8c7c7cf0db52,},Annotations:map[string]string{io.kubernetes.container.hash: 480877de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed32adda6c6b7be0d8f4a42bc4f8dbae192074349a50c1c1a828c6f63fe1e0ff,PodSandboxId:198e2ad1cffb74b0533f1acf46aa96dc4f706a886c559e9a5ea59fb88f10a125,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718227432826125333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1e6e8046-96c5-4ea9-9022-5b09a2617cec,},Annotations:map[string]string{io.kubernetes.container.hash: 33f7d230,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b79b8ccffc07fe65b331afa1c40c3b9f7411f336e0d6ba460f6cc73351e6cb1,PodSandboxId:dbc801792515792466eb6fce964ba7c3ef913ff0f68036d489ff7883e378a544,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227432812127689,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vhfcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b632542-e6e9-4ae6-828c-92
99276c6ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 4104c976,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdf23fc510f606fa07784b8e92bc3d05acac6e068d15f90c21c5b7c0011be59,PodSandboxId:2692ccccf391b4c8ffe354408715940519561ecffebab1ed8a4d690c0faa7ac8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718227429065976710,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7372f3393746ef7b5340de8fef06750a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90a57f3dc8214c241b2f517676c13c62d844ecebe3e4ebf412e8b87994eaff1,PodSandboxId:b464d0ea1c55bf3c124959cf37c5256d9c8339cd5ace82d8ee1575109446ef7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718227429042837
907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4063507a1f83b98643c19acf6ec69cf,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de477c159e43d30200bc2a41d4d7e60d47c6a8481e9006b175aaeb04560adfa6,PodSandboxId:fd3bc63dad92024279fb63a4a0744ec278cc85d6e28b4be129b8c749d33cfcea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:171
8227429037124729,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39afb5594f40a9577379988d9b297a5,},Annotations:map[string]string{io.kubernetes.container.hash: eebb16ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:637ee5f338c22718d248895d3838f3311a64fbce786391cc05fad28b1673ccba,PodSandboxId:43f79cf52a38cc20b39993ff9a580fb7bfa9c29ed8908911a2380ef423400316,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:171822742674829835
4,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92836c158f3d15e314e49137598bc383,},Annotations:map[string]string{io.kubernetes.container.hash: e6ccd716,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c1806d07405fc3f7c690bb6e8474ea72796eb8626d66955b2199ed27d2d3d2,PodSandboxId:198e2ad1cffb74b0533f1acf46aa96dc4f706a886c559e9a5ea59fb88f10a125,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718227424750125368,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6e8046-96c5-4ea9-9022-5b09a2617cec,},Annotations:map[string]string{io.kubernetes.container.hash: 33f7d230,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415c589670fb68dcea8f071b2cc4f3f8b23cb69d6450301f3d2bbd3dc302875f,PodSandboxId:dbc801792515792466eb6fce964ba7c3ef913ff0f68036d489ff7883e378a544,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227411400304608,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vhfcz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b632542-e6e9-4ae6-828c-9299276c6ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 4104c976,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:497bf75c64a8bca3301f17325a843584428a5930c49f49564824a4d54c2f297a,PodSandboxId:059a5eb9545c41d4e468c1f5f9c117f66fa00756992fb142261735a8985ea0ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227411370403478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-54l7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc593b3b-e9cd-4cff-b9b2-8c7c7cf0db52,},Annotations:map[string]string{io.kubernetes.container.hash: 480877de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e526608624f480c35c1cb49aa760bf06c44a01b1e1c60f3f19f8f31061ba885,PodSandboxId:f1dfe30a51e57eb8dcde0f803450f08c786aac4bccd5e992a1f6be48d0f
fe493,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718227408029991974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ssjq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2c1e4f6-5d0c-44fc-8c66-371b6b75f3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 84b38528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38168a764cb804d1b8bc6d54a1d6e00cbeefbff0d81d5fc5c0b2d2a4dd2f974f,PodSandboxId:257897b8bc17effb16e1a656e8caea6d0861b5df1ccae6b34fb13e720b4eb08c,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718227407873823102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92836c158f3d15e314e49137598bc383,},Annotations:map[string]string{io.kubernetes.container.hash: e6ccd716,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75332419696c294fd56c370fe43f18d272a90e0690a71e0648c9428d3efe56f8,PodSandboxId:9c6fe414048aab5f8f42465852274cb11fed9b96da47c82c18196ad4085efd6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Imag
e:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718227407774903289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e39afb5594f40a9577379988d9b297a5,},Annotations:map[string]string{io.kubernetes.container.hash: eebb16ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6131829b63284781bcf2e27b4470daf45c0f0d4470d0c451df4a04b0513f786f,PodSandboxId:51f5389cc17692e27247a8467ebf9777d423efbfde81844dd7bec262ca30ed07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},I
mage:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718227407673911620,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4063507a1f83b98643c19acf6ec69cf,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cd5a7f3adbe9f9ce0558b2ef5a03163eb5d69628e1e4de04fc031c5bb8b0cc7,PodSandboxId:6e3beeb69fcc3fc835ab310e8582b6f511615f84574bf2b4976d23b79661b1e0,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718227407640466934,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-724108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7372f3393746ef7b5340de8fef06750a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d408131e-6f5d-4c03-8758-b909622ef010 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	da3b52c9a1faa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   059a5eb9545c4       coredns-7db6d8ff4d-54l7k
	ed32adda6c6b7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   198e2ad1cffb7       storage-provisioner
	5b79b8ccffc07       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   dbc8017925157       coredns-7db6d8ff4d-vhfcz
	f5dbc7b120727       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   3 seconds ago       Running             kube-proxy                2                   a0987616bed85       kube-proxy-ssjq6
	8cdf23fc510f6       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   6 seconds ago       Running             kube-scheduler            2                   2692ccccf391b       kube-scheduler-kubernetes-upgrade-724108
	b90a57f3dc821       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   6 seconds ago       Running             kube-controller-manager   2                   b464d0ea1c55b       kube-controller-manager-kubernetes-upgrade-724108
	de477c159e43d       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   6 seconds ago       Running             kube-apiserver            2                   fd3bc63dad920       kube-apiserver-kubernetes-upgrade-724108
	637ee5f338c22       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 seconds ago       Running             etcd                      2                   43f79cf52a38c       etcd-kubernetes-upgrade-724108
	06c1806d07405       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Exited              storage-provisioner       2                   198e2ad1cffb7       storage-provisioner
	415c589670fb6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Exited              coredns                   1                   dbc8017925157       coredns-7db6d8ff4d-vhfcz
	497bf75c64a8b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Exited              coredns                   1                   059a5eb9545c4       coredns-7db6d8ff4d-54l7k
	9e526608624f4       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   27 seconds ago      Exited              kube-proxy                1                   f1dfe30a51e57       kube-proxy-ssjq6
	38168a764cb80       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   27 seconds ago      Exited              etcd                      1                   257897b8bc17e       etcd-kubernetes-upgrade-724108
	75332419696c2       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   28 seconds ago      Exited              kube-apiserver            1                   9c6fe414048aa       kube-apiserver-kubernetes-upgrade-724108
	6131829b63284       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   28 seconds ago      Exited              kube-controller-manager   1                   51f5389cc1769       kube-controller-manager-kubernetes-upgrade-724108
	8cd5a7f3adbe9       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   28 seconds ago      Exited              kube-scheduler            1                   6e3beeb69fcc3       kube-scheduler-kubernetes-upgrade-724108
	
	
	==> coredns [415c589670fb68dcea8f071b2cc4f3f8b23cb69d6450301f3d2bbd3dc302875f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [497bf75c64a8bca3301f17325a843584428a5930c49f49564824a4d54c2f297a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5b79b8ccffc07fe65b331afa1c40c3b9f7411f336e0d6ba460f6cc73351e6cb1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [da3b52c9a1faa48345dafcb5ee2ac4041585bbda54b755b689d6ac0aee2f0ba0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-724108
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-724108
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:22:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-724108
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:23:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:23:52 +0000   Wed, 12 Jun 2024 21:22:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:23:52 +0000   Wed, 12 Jun 2024 21:22:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:23:52 +0000   Wed, 12 Jun 2024 21:22:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:23:52 +0000   Wed, 12 Jun 2024 21:22:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.31
	  Hostname:    kubernetes-upgrade-724108
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 169c511eaad54cc0b21c439a9e557405
	  System UUID:                169c511e-aad5-4cc0-b21c-439a9e557405
	  Boot ID:                    50897706-c206-4ca9-8e84-eb06f03b4271
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-54l7k                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     44s
	  kube-system                 coredns-7db6d8ff4d-vhfcz                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     44s
	  kube-system                 etcd-kubernetes-upgrade-724108                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         51s
	  kube-system                 kube-apiserver-kubernetes-upgrade-724108             250m (12%)    0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-724108    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-ssjq6                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-scheduler-kubernetes-upgrade-724108             100m (5%)     0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node kubernetes-upgrade-724108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x7 over 65s)  kubelet          Node kubernetes-upgrade-724108 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  65s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node kubernetes-upgrade-724108 status is now: NodeHasSufficientMemory
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           44s                node-controller  Node kubernetes-upgrade-724108 event: Registered Node kubernetes-upgrade-724108 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-724108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-724108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-724108 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.001826] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.068659] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068410] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.198299] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.135970] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.292891] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +4.672278] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +0.062431] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.585050] systemd-fstab-generator[857]: Ignoring "noauto" option for root device
	[Jun12 21:23] systemd-fstab-generator[1253]: Ignoring "noauto" option for root device
	[  +0.099742] kauditd_printk_skb: 97 callbacks suppressed
	[ +10.345634] kauditd_printk_skb: 21 callbacks suppressed
	[ +13.314824] systemd-fstab-generator[2188]: Ignoring "noauto" option for root device
	[  +0.112323] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.081502] systemd-fstab-generator[2200]: Ignoring "noauto" option for root device
	[  +0.263613] systemd-fstab-generator[2214]: Ignoring "noauto" option for root device
	[  +0.187423] systemd-fstab-generator[2226]: Ignoring "noauto" option for root device
	[  +1.537128] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +1.560950] systemd-fstab-generator[3189]: Ignoring "noauto" option for root device
	[  +1.388897] kauditd_printk_skb: 290 callbacks suppressed
	[ +16.693610] systemd-fstab-generator[3999]: Ignoring "noauto" option for root device
	[  +0.121836] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.118494] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.615748] systemd-fstab-generator[4508]: Ignoring "noauto" option for root device
	
	
	==> etcd [38168a764cb804d1b8bc6d54a1d6e00cbeefbff0d81d5fc5c0b2d2a4dd2f974f] <==
	{"level":"warn","ts":"2024-06-12T21:23:28.559461Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-12T21:23:28.559853Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.50.31:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.50.31:2380","--initial-cluster=kubernetes-upgrade-724108=https://192.168.50.31:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.50.31:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.50.31:2380","--name=kubernetes-upgrade-724108","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot
-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-06-12T21:23:28.560486Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-06-12T21:23:28.56054Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-12T21:23:28.560554Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.50.31:2380"]}
	{"level":"info","ts":"2024-06-12T21:23:28.560582Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-12T21:23:28.562115Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.31:2379"]}
	{"level":"info","ts":"2024-06-12T21:23:28.56634Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-724108","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.50.31:2380"],"listen-peer-urls":["https://192.168.50.31:2380"],"advertise-client-urls":["https://192.168.50.31:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.31:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","in
itial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-06-12T21:23:28.61259Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"46.01067ms"}
	{"level":"info","ts":"2024-06-12T21:23:28.67289Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	
	
	==> etcd [637ee5f338c22718d248895d3838f3311a64fbce786391cc05fad28b1673ccba] <==
	{"level":"info","ts":"2024-06-12T21:23:46.945497Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-12T21:23:46.945643Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-06-12T21:23:46.946457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 switched to configuration voters=(17873060208107511526)"}
	{"level":"info","ts":"2024-06-12T21:23:46.946535Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"320a05faed2c1128","local-member-id":"f809dd90516adee6","added-peer-id":"f809dd90516adee6","added-peer-peer-urls":["https://192.168.50.31:2380"]}
	{"level":"info","ts":"2024-06-12T21:23:46.946713Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"320a05faed2c1128","local-member-id":"f809dd90516adee6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:23:46.946752Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:23:46.950813Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-12T21:23:46.950987Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.31:2380"}
	{"level":"info","ts":"2024-06-12T21:23:46.951032Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.31:2380"}
	{"level":"info","ts":"2024-06-12T21:23:46.951367Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f809dd90516adee6","initial-advertise-peer-urls":["https://192.168.50.31:2380"],"listen-peer-urls":["https://192.168.50.31:2380"],"advertise-client-urls":["https://192.168.50.31:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.31:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-12T21:23:46.951421Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-12T21:23:48.518819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-12T21:23:48.518915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-12T21:23:48.518944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 received MsgPreVoteResp from f809dd90516adee6 at term 2"}
	{"level":"info","ts":"2024-06-12T21:23:48.518968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 became candidate at term 3"}
	{"level":"info","ts":"2024-06-12T21:23:48.518977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 received MsgVoteResp from f809dd90516adee6 at term 3"}
	{"level":"info","ts":"2024-06-12T21:23:48.519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 became leader at term 3"}
	{"level":"info","ts":"2024-06-12T21:23:48.519038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f809dd90516adee6 elected leader f809dd90516adee6 at term 3"}
	{"level":"info","ts":"2024-06-12T21:23:48.525942Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f809dd90516adee6","local-member-attributes":"{Name:kubernetes-upgrade-724108 ClientURLs:[https://192.168.50.31:2379]}","request-path":"/0/members/f809dd90516adee6/attributes","cluster-id":"320a05faed2c1128","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-12T21:23:48.526021Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:23:48.527875Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-12T21:23:48.543246Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:23:48.54353Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-12T21:23:48.543582Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-12T21:23:48.548408Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.31:2379"}
	
	
	==> kernel <==
	 21:23:56 up 1 min,  0 users,  load average: 1.65, 0.58, 0.20
	Linux kubernetes-upgrade-724108 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [75332419696c294fd56c370fe43f18d272a90e0690a71e0648c9428d3efe56f8] <==
	I0612 21:23:28.551359       1 options.go:221] external host was not specified, using 192.168.50.31
	I0612 21:23:28.560634       1 server.go:148] Version: v1.30.1
	I0612 21:23:28.560701       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [de477c159e43d30200bc2a41d4d7e60d47c6a8481e9006b175aaeb04560adfa6] <==
	I0612 21:23:51.982013       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 21:23:52.045389       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0612 21:23:52.059394       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 21:23:52.059430       1 policy_source.go:224] refreshing policies
	I0612 21:23:52.081903       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0612 21:23:52.081956       1 aggregator.go:165] initial CRD sync complete...
	I0612 21:23:52.081965       1 autoregister_controller.go:141] Starting autoregister controller
	I0612 21:23:52.081972       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0612 21:23:52.081979       1 cache.go:39] Caches are synced for autoregister controller
	I0612 21:23:52.108858       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0612 21:23:52.116845       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0612 21:23:52.117826       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0612 21:23:52.117858       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0612 21:23:52.117948       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0612 21:23:52.119652       1 shared_informer.go:320] Caches are synced for configmaps
	I0612 21:23:52.122258       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0612 21:23:52.128109       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0612 21:23:52.128330       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0612 21:23:52.938892       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0612 21:23:53.083970       1 controller.go:615] quota admission added evaluator for: endpoints
	I0612 21:23:53.703739       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 21:23:53.713959       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 21:23:53.749489       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 21:23:53.883656       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0612 21:23:53.890273       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [6131829b63284781bcf2e27b4470daf45c0f0d4470d0c451df4a04b0513f786f] <==
	
	
	==> kube-controller-manager [b90a57f3dc8214c241b2f517676c13c62d844ecebe3e4ebf412e8b87994eaff1] <==
	I0612 21:23:54.060687       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0612 21:23:54.060857       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0612 21:23:54.060880       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0612 21:23:54.064135       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0612 21:23:54.064485       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0612 21:23:54.064565       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0612 21:23:54.067326       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0612 21:23:54.067631       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0612 21:23:54.067656       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0612 21:23:54.071004       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0612 21:23:54.071201       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0612 21:23:54.071225       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0612 21:23:54.098966       1 shared_informer.go:320] Caches are synced for tokens
	I0612 21:23:54.109531       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0612 21:23:54.109669       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0612 21:23:54.111815       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0612 21:23:54.111952       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0612 21:23:54.112006       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0612 21:23:54.112054       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0612 21:23:54.113866       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0612 21:23:54.114039       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0612 21:23:54.114066       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0612 21:23:54.115797       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0612 21:23:54.115925       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0612 21:23:54.115951       1 shared_informer.go:313] Waiting for caches to sync for TTL
	
	
	==> kube-proxy [9e526608624f480c35c1cb49aa760bf06c44a01b1e1c60f3f19f8f31061ba885] <==
	
	
	==> kube-proxy [f5dbc7b120727a15d828eba8ba95c3daf748ed7121ae63e3926195fc7dcbe56f] <==
	I0612 21:23:53.134462       1 server_linux.go:69] "Using iptables proxy"
	I0612 21:23:53.160063       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.31"]
	I0612 21:23:53.236643       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 21:23:53.236813       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 21:23:53.236888       1 server_linux.go:165] "Using iptables Proxier"
	I0612 21:23:53.243210       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 21:23:53.243536       1 server.go:872] "Version info" version="v1.30.1"
	I0612 21:23:53.243610       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:23:53.244766       1 config.go:192] "Starting service config controller"
	I0612 21:23:53.244865       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 21:23:53.244958       1 config.go:101] "Starting endpoint slice config controller"
	I0612 21:23:53.245001       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 21:23:53.245595       1 config.go:319] "Starting node config controller"
	I0612 21:23:53.245665       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 21:23:53.345723       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 21:23:53.345802       1 shared_informer.go:320] Caches are synced for node config
	I0612 21:23:53.345894       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [8cd5a7f3adbe9f9ce0558b2ef5a03163eb5d69628e1e4de04fc031c5bb8b0cc7] <==
	
	
	==> kube-scheduler [8cdf23fc510f606fa07784b8e92bc3d05acac6e068d15f90c21c5b7c0011be59] <==
	I0612 21:23:50.610079       1 serving.go:380] Generated self-signed cert in-memory
	W0612 21:23:52.017522       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0612 21:23:52.017566       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 21:23:52.017576       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0612 21:23:52.017581       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 21:23:52.043508       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 21:23:52.043558       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:23:52.045931       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 21:23:52.046034       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 21:23:52.046043       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 21:23:52.046063       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 21:23:52.146464       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 21:23:49 kubernetes-upgrade-724108 kubelet[4006]: E0612 21:23:49.079516    4006 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-724108?timeout=10s\": dial tcp 192.168.50.31:8443: connect: connection refused" interval="800ms"
	Jun 12 21:23:49 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:49.195652    4006 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-724108"
	Jun 12 21:23:49 kubernetes-upgrade-724108 kubelet[4006]: E0612 21:23:49.196947    4006 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.31:8443: connect: connection refused" node="kubernetes-upgrade-724108"
	Jun 12 21:23:49 kubernetes-upgrade-724108 kubelet[4006]: W0612 21:23:49.440856    4006 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.31:8443: connect: connection refused
	Jun 12 21:23:49 kubernetes-upgrade-724108 kubelet[4006]: E0612 21:23:49.440918    4006 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.31:8443: connect: connection refused
	Jun 12 21:23:49 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:49.998840    4006 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-724108"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.080468    4006 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-724108"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.081057    4006 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-724108"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.085375    4006 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.087511    4006 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.455661    4006 apiserver.go:52] "Watching apiserver"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.459221    4006 topology_manager.go:215] "Topology Admit Handler" podUID="1e6e8046-96c5-4ea9-9022-5b09a2617cec" podNamespace="kube-system" podName="storage-provisioner"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.459687    4006 topology_manager.go:215] "Topology Admit Handler" podUID="dc593b3b-e9cd-4cff-b9b2-8c7c7cf0db52" podNamespace="kube-system" podName="coredns-7db6d8ff4d-54l7k"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.459937    4006 topology_manager.go:215] "Topology Admit Handler" podUID="7b632542-e6e9-4ae6-828c-9299276c6ae7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vhfcz"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.460105    4006 topology_manager.go:215] "Topology Admit Handler" podUID="c2c1e4f6-5d0c-44fc-8c66-371b6b75f3ee" podNamespace="kube-system" podName="kube-proxy-ssjq6"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.474960    4006 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.508269    4006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2c1e4f6-5d0c-44fc-8c66-371b6b75f3ee-lib-modules\") pod \"kube-proxy-ssjq6\" (UID: \"c2c1e4f6-5d0c-44fc-8c66-371b6b75f3ee\") " pod="kube-system/kube-proxy-ssjq6"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.508394    4006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1e6e8046-96c5-4ea9-9022-5b09a2617cec-tmp\") pod \"storage-provisioner\" (UID: \"1e6e8046-96c5-4ea9-9022-5b09a2617cec\") " pod="kube-system/storage-provisioner"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.508460    4006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2c1e4f6-5d0c-44fc-8c66-371b6b75f3ee-xtables-lock\") pod \"kube-proxy-ssjq6\" (UID: \"c2c1e4f6-5d0c-44fc-8c66-371b6b75f3ee\") " pod="kube-system/kube-proxy-ssjq6"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.760765    4006 scope.go:117] "RemoveContainer" containerID="9e526608624f480c35c1cb49aa760bf06c44a01b1e1c60f3f19f8f31061ba885"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.762897    4006 scope.go:117] "RemoveContainer" containerID="497bf75c64a8bca3301f17325a843584428a5930c49f49564824a4d54c2f297a"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.763244    4006 scope.go:117] "RemoveContainer" containerID="415c589670fb68dcea8f071b2cc4f3f8b23cb69d6450301f3d2bbd3dc302875f"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:52.763616    4006 scope.go:117] "RemoveContainer" containerID="06c1806d07405fc3f7c690bb6e8474ea72796eb8626d66955b2199ed27d2d3d2"
	Jun 12 21:23:52 kubernetes-upgrade-724108 kubelet[4006]: E0612 21:23:52.777079    4006 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-724108\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-724108"
	Jun 12 21:23:54 kubernetes-upgrade-724108 kubelet[4006]: I0612 21:23:54.781851    4006 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [06c1806d07405fc3f7c690bb6e8474ea72796eb8626d66955b2199ed27d2d3d2] <==
	I0612 21:23:44.835701       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0612 21:23:44.839652       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [ed32adda6c6b7be0d8f4a42bc4f8dbae192074349a50c1c1a828c6f63fe1e0ff] <==
	I0612 21:23:53.036224       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0612 21:23:53.072313       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0612 21:23:53.072385       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0612 21:23:53.092454       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0612 21:23:53.092597       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-724108_55f342ec-2c67-451d-8602-3fd731f127c1!
	I0612 21:23:53.093575       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d7c2f18e-ed29-4365-8103-469bbaf8398a", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-724108_55f342ec-2c67-451d-8602-3fd731f127c1 became leader
	I0612 21:23:53.193413       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-724108_55f342ec-2c67-451d-8602-3fd731f127c1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-724108 -n kubernetes-upgrade-724108
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-724108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-724108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-724108
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-724108: (1.105554407s)
--- FAIL: TestKubernetesUpgrade (400.10s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (50.97s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-037058 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-037058 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.58734718s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-037058] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-037058" primary control-plane node in "pause-037058" cluster
	* Updating the running kvm2 "pause-037058" VM ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-037058" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 21:22:58.602738   64208 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:22:58.602965   64208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:22:58.602972   64208 out.go:304] Setting ErrFile to fd 2...
	I0612 21:22:58.602976   64208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:22:58.603158   64208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:22:58.603672   64208 out.go:298] Setting JSON to false
	I0612 21:22:58.604547   64208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7524,"bootTime":1718219855,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:22:58.604602   64208 start.go:139] virtualization: kvm guest
	I0612 21:22:58.647837   64208 out.go:177] * [pause-037058] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:22:58.649758   64208 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:22:58.649708   64208 notify.go:220] Checking for updates...
	I0612 21:22:58.651350   64208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:22:58.652740   64208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:22:58.654218   64208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:22:58.655563   64208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:22:58.657016   64208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:22:58.659111   64208 config.go:182] Loaded profile config "pause-037058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:22:58.659708   64208 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:22:58.659763   64208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:22:58.675596   64208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35949
	I0612 21:22:58.676048   64208 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:22:58.676647   64208 main.go:141] libmachine: Using API Version  1
	I0612 21:22:58.676666   64208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:22:58.677051   64208 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:22:58.677264   64208 main.go:141] libmachine: (pause-037058) Calling .DriverName
	I0612 21:22:58.677514   64208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:22:58.677837   64208 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:22:58.677895   64208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:22:58.692817   64208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38401
	I0612 21:22:58.693228   64208 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:22:58.693754   64208 main.go:141] libmachine: Using API Version  1
	I0612 21:22:58.693780   64208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:22:58.694124   64208 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:22:58.694342   64208 main.go:141] libmachine: (pause-037058) Calling .DriverName
	I0612 21:22:58.733155   64208 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 21:22:58.734658   64208 start.go:297] selected driver: kvm2
	I0612 21:22:58.734678   64208 start.go:901] validating driver "kvm2" against &{Name:pause-037058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.1 ClusterName:pause-037058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.183 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:22:58.734873   64208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:22:58.735324   64208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:22:58.735412   64208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:22:58.750831   64208 install.go:137] /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:22:58.751543   64208 cni.go:84] Creating CNI manager for ""
	I0612 21:22:58.751559   64208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:22:58.751618   64208 start.go:340] cluster config:
	{Name:pause-037058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-037058 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.183 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:22:58.751730   64208 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:22:58.753745   64208 out.go:177] * Starting "pause-037058" primary control-plane node in "pause-037058" cluster
	I0612 21:22:58.755120   64208 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:22:58.755153   64208 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0612 21:22:58.755168   64208 cache.go:56] Caching tarball of preloaded images
	I0612 21:22:58.755258   64208 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:22:58.755268   64208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 21:22:58.755423   64208 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/pause-037058/config.json ...
	I0612 21:22:58.755625   64208 start.go:360] acquireMachinesLock for pause-037058: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:23:08.155793   64208 start.go:364] duration metric: took 9.400126052s to acquireMachinesLock for "pause-037058"
	I0612 21:23:08.155865   64208 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:23:08.155890   64208 fix.go:54] fixHost starting: 
	I0612 21:23:08.156293   64208 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:23:08.156343   64208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:23:08.176359   64208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34883
	I0612 21:23:08.176855   64208 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:23:08.177485   64208 main.go:141] libmachine: Using API Version  1
	I0612 21:23:08.177510   64208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:23:08.177889   64208 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:23:08.178131   64208 main.go:141] libmachine: (pause-037058) Calling .DriverName
	I0612 21:23:08.178319   64208 main.go:141] libmachine: (pause-037058) Calling .GetState
	I0612 21:23:08.180171   64208 fix.go:112] recreateIfNeeded on pause-037058: state=Running err=<nil>
	W0612 21:23:08.180192   64208 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:23:08.182358   64208 out.go:177] * Updating the running kvm2 "pause-037058" VM ...
	I0612 21:23:08.183573   64208 machine.go:94] provisionDockerMachine start ...
	I0612 21:23:08.183598   64208 main.go:141] libmachine: (pause-037058) Calling .DriverName
	I0612 21:23:08.183769   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHHostname
	I0612 21:23:08.186212   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:08.186701   64208 main.go:141] libmachine: (pause-037058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:3e:bd", ip: ""} in network mk-pause-037058: {Iface:virbr3 ExpiryTime:2024-06-12 22:22:14 +0000 UTC Type:0 Mac:52:54:00:ec:3e:bd Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:pause-037058 Clientid:01:52:54:00:ec:3e:bd}
	I0612 21:23:08.186726   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined IP address 192.168.61.183 and MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:08.186873   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHPort
	I0612 21:23:08.187082   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHKeyPath
	I0612 21:23:08.187296   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHKeyPath
	I0612 21:23:08.187458   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHUsername
	I0612 21:23:08.187652   64208 main.go:141] libmachine: Using SSH client type: native
	I0612 21:23:08.187860   64208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I0612 21:23:08.187874   64208 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:23:08.295544   64208 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-037058
	
	I0612 21:23:08.295581   64208 main.go:141] libmachine: (pause-037058) Calling .GetMachineName
	I0612 21:23:08.295849   64208 buildroot.go:166] provisioning hostname "pause-037058"
	I0612 21:23:08.295874   64208 main.go:141] libmachine: (pause-037058) Calling .GetMachineName
	I0612 21:23:08.296055   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHHostname
	I0612 21:23:08.298582   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:08.298957   64208 main.go:141] libmachine: (pause-037058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:3e:bd", ip: ""} in network mk-pause-037058: {Iface:virbr3 ExpiryTime:2024-06-12 22:22:14 +0000 UTC Type:0 Mac:52:54:00:ec:3e:bd Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:pause-037058 Clientid:01:52:54:00:ec:3e:bd}
	I0612 21:23:08.298986   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined IP address 192.168.61.183 and MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:08.299092   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHPort
	I0612 21:23:08.299286   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHKeyPath
	I0612 21:23:08.299455   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHKeyPath
	I0612 21:23:08.299601   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHUsername
	I0612 21:23:08.299755   64208 main.go:141] libmachine: Using SSH client type: native
	I0612 21:23:08.299957   64208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I0612 21:23:08.299970   64208 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-037058 && echo "pause-037058" | sudo tee /etc/hostname
	I0612 21:23:08.420729   64208 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-037058
	
	I0612 21:23:08.420759   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHHostname
	I0612 21:23:08.423693   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:08.424062   64208 main.go:141] libmachine: (pause-037058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:3e:bd", ip: ""} in network mk-pause-037058: {Iface:virbr3 ExpiryTime:2024-06-12 22:22:14 +0000 UTC Type:0 Mac:52:54:00:ec:3e:bd Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:pause-037058 Clientid:01:52:54:00:ec:3e:bd}
	I0612 21:23:08.424092   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined IP address 192.168.61.183 and MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:08.424269   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHPort
	I0612 21:23:08.424482   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHKeyPath
	I0612 21:23:08.424659   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHKeyPath
	I0612 21:23:08.424869   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHUsername
	I0612 21:23:08.425068   64208 main.go:141] libmachine: Using SSH client type: native
	I0612 21:23:08.425222   64208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I0612 21:23:08.425237   64208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-037058' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-037058/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-037058' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:23:08.532190   64208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:23:08.532222   64208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:23:08.532252   64208 buildroot.go:174] setting up certificates
	I0612 21:23:08.532260   64208 provision.go:84] configureAuth start
	I0612 21:23:08.532269   64208 main.go:141] libmachine: (pause-037058) Calling .GetMachineName
	I0612 21:23:08.532548   64208 main.go:141] libmachine: (pause-037058) Calling .GetIP
	I0612 21:23:08.535129   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:08.535629   64208 main.go:141] libmachine: (pause-037058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:3e:bd", ip: ""} in network mk-pause-037058: {Iface:virbr3 ExpiryTime:2024-06-12 22:22:14 +0000 UTC Type:0 Mac:52:54:00:ec:3e:bd Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:pause-037058 Clientid:01:52:54:00:ec:3e:bd}
	I0612 21:23:08.535657   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined IP address 192.168.61.183 and MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:08.535785   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHHostname
	I0612 21:23:08.538097   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:08.538467   64208 main.go:141] libmachine: (pause-037058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:3e:bd", ip: ""} in network mk-pause-037058: {Iface:virbr3 ExpiryTime:2024-06-12 22:22:14 +0000 UTC Type:0 Mac:52:54:00:ec:3e:bd Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:pause-037058 Clientid:01:52:54:00:ec:3e:bd}
	I0612 21:23:08.538501   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined IP address 192.168.61.183 and MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:08.538669   64208 provision.go:143] copyHostCerts
	I0612 21:23:08.538731   64208 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:23:08.538744   64208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:23:08.538810   64208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:23:08.538944   64208 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:23:08.538953   64208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:23:08.538976   64208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:23:08.539065   64208 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:23:08.539075   64208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:23:08.539101   64208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:23:08.539190   64208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.pause-037058 san=[127.0.0.1 192.168.61.183 localhost minikube pause-037058]
	I0612 21:23:08.730362   64208 provision.go:177] copyRemoteCerts
	I0612 21:23:08.730417   64208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:23:08.730441   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHHostname
	I0612 21:23:08.732974   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:08.733272   64208 main.go:141] libmachine: (pause-037058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:3e:bd", ip: ""} in network mk-pause-037058: {Iface:virbr3 ExpiryTime:2024-06-12 22:22:14 +0000 UTC Type:0 Mac:52:54:00:ec:3e:bd Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:pause-037058 Clientid:01:52:54:00:ec:3e:bd}
	I0612 21:23:08.733310   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined IP address 192.168.61.183 and MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:08.733477   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHPort
	I0612 21:23:08.733674   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHKeyPath
	I0612 21:23:08.733829   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHUsername
	I0612 21:23:08.733995   64208 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/pause-037058/id_rsa Username:docker}
	I0612 21:23:08.818456   64208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:23:08.846823   64208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0612 21:23:08.873120   64208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 21:23:08.899968   64208 provision.go:87] duration metric: took 367.696979ms to configureAuth
	I0612 21:23:08.900001   64208 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:23:08.900224   64208 config.go:182] Loaded profile config "pause-037058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:23:08.900317   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHHostname
	I0612 21:23:08.903124   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:08.903489   64208 main.go:141] libmachine: (pause-037058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:3e:bd", ip: ""} in network mk-pause-037058: {Iface:virbr3 ExpiryTime:2024-06-12 22:22:14 +0000 UTC Type:0 Mac:52:54:00:ec:3e:bd Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:pause-037058 Clientid:01:52:54:00:ec:3e:bd}
	I0612 21:23:08.903517   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined IP address 192.168.61.183 and MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:08.903667   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHPort
	I0612 21:23:08.903859   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHKeyPath
	I0612 21:23:08.904042   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHKeyPath
	I0612 21:23:08.904198   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHUsername
	I0612 21:23:08.904364   64208 main.go:141] libmachine: Using SSH client type: native
	I0612 21:23:08.904578   64208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I0612 21:23:08.904600   64208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:23:14.596341   64208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:23:14.596383   64208 machine.go:97] duration metric: took 6.412790866s to provisionDockerMachine
	I0612 21:23:14.596399   64208 start.go:293] postStartSetup for "pause-037058" (driver="kvm2")
	I0612 21:23:14.596411   64208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:23:14.596433   64208 main.go:141] libmachine: (pause-037058) Calling .DriverName
	I0612 21:23:14.596903   64208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:23:14.596935   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHHostname
	I0612 21:23:14.600753   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:14.601192   64208 main.go:141] libmachine: (pause-037058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:3e:bd", ip: ""} in network mk-pause-037058: {Iface:virbr3 ExpiryTime:2024-06-12 22:22:14 +0000 UTC Type:0 Mac:52:54:00:ec:3e:bd Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:pause-037058 Clientid:01:52:54:00:ec:3e:bd}
	I0612 21:23:14.601238   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined IP address 192.168.61.183 and MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:14.601518   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHPort
	I0612 21:23:14.601710   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHKeyPath
	I0612 21:23:14.601889   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHUsername
	I0612 21:23:14.602091   64208 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/pause-037058/id_rsa Username:docker}
	I0612 21:23:14.692312   64208 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:23:14.697138   64208 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:23:14.697168   64208 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:23:14.697236   64208 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:23:14.697327   64208 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:23:14.697447   64208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:23:14.708575   64208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:23:14.737950   64208 start.go:296] duration metric: took 141.536657ms for postStartSetup
	I0612 21:23:14.737994   64208 fix.go:56] duration metric: took 6.582120686s for fixHost
	I0612 21:23:14.738017   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHHostname
	I0612 21:23:14.741230   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:14.741564   64208 main.go:141] libmachine: (pause-037058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:3e:bd", ip: ""} in network mk-pause-037058: {Iface:virbr3 ExpiryTime:2024-06-12 22:22:14 +0000 UTC Type:0 Mac:52:54:00:ec:3e:bd Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:pause-037058 Clientid:01:52:54:00:ec:3e:bd}
	I0612 21:23:14.741598   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined IP address 192.168.61.183 and MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:14.741771   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHPort
	I0612 21:23:14.741952   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHKeyPath
	I0612 21:23:14.742148   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHKeyPath
	I0612 21:23:14.742341   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHUsername
	I0612 21:23:14.742521   64208 main.go:141] libmachine: Using SSH client type: native
	I0612 21:23:14.742720   64208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I0612 21:23:14.742728   64208 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 21:23:14.865218   64208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718227394.856717974
	
	I0612 21:23:14.865244   64208 fix.go:216] guest clock: 1718227394.856717974
	I0612 21:23:14.865253   64208 fix.go:229] Guest: 2024-06-12 21:23:14.856717974 +0000 UTC Remote: 2024-06-12 21:23:14.737998126 +0000 UTC m=+16.172365556 (delta=118.719848ms)
	I0612 21:23:14.865277   64208 fix.go:200] guest clock delta is within tolerance: 118.719848ms
	I0612 21:23:14.865304   64208 start.go:83] releasing machines lock for "pause-037058", held for 6.709452443s
	I0612 21:23:14.865336   64208 main.go:141] libmachine: (pause-037058) Calling .DriverName
	I0612 21:23:14.865620   64208 main.go:141] libmachine: (pause-037058) Calling .GetIP
	I0612 21:23:14.868563   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:14.869063   64208 main.go:141] libmachine: (pause-037058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:3e:bd", ip: ""} in network mk-pause-037058: {Iface:virbr3 ExpiryTime:2024-06-12 22:22:14 +0000 UTC Type:0 Mac:52:54:00:ec:3e:bd Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:pause-037058 Clientid:01:52:54:00:ec:3e:bd}
	I0612 21:23:14.869091   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined IP address 192.168.61.183 and MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:14.869335   64208 main.go:141] libmachine: (pause-037058) Calling .DriverName
	I0612 21:23:14.870051   64208 main.go:141] libmachine: (pause-037058) Calling .DriverName
	I0612 21:23:14.870268   64208 main.go:141] libmachine: (pause-037058) Calling .DriverName
	I0612 21:23:14.870379   64208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:23:14.870425   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHHostname
	I0612 21:23:14.870838   64208 ssh_runner.go:195] Run: cat /version.json
	I0612 21:23:14.870897   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHHostname
	I0612 21:23:14.874284   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:14.874737   64208 main.go:141] libmachine: (pause-037058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:3e:bd", ip: ""} in network mk-pause-037058: {Iface:virbr3 ExpiryTime:2024-06-12 22:22:14 +0000 UTC Type:0 Mac:52:54:00:ec:3e:bd Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:pause-037058 Clientid:01:52:54:00:ec:3e:bd}
	I0612 21:23:14.874760   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined IP address 192.168.61.183 and MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:14.875056   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:14.875411   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHPort
	I0612 21:23:14.875593   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHKeyPath
	I0612 21:23:14.875686   64208 main.go:141] libmachine: (pause-037058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:3e:bd", ip: ""} in network mk-pause-037058: {Iface:virbr3 ExpiryTime:2024-06-12 22:22:14 +0000 UTC Type:0 Mac:52:54:00:ec:3e:bd Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:pause-037058 Clientid:01:52:54:00:ec:3e:bd}
	I0612 21:23:14.875706   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined IP address 192.168.61.183 and MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:14.875975   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHPort
	I0612 21:23:14.875993   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHUsername
	I0612 21:23:14.876141   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHKeyPath
	I0612 21:23:14.876250   64208 main.go:141] libmachine: (pause-037058) Calling .GetSSHUsername
	I0612 21:23:14.876350   64208 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/pause-037058/id_rsa Username:docker}
	I0612 21:23:14.876439   64208 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/pause-037058/id_rsa Username:docker}
	I0612 21:23:14.983308   64208 ssh_runner.go:195] Run: systemctl --version
	I0612 21:23:14.992556   64208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:23:15.168260   64208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:23:15.176812   64208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:23:15.176890   64208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:23:15.188033   64208 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0612 21:23:15.188100   64208 start.go:494] detecting cgroup driver to use...
	I0612 21:23:15.188195   64208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:23:15.209608   64208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:23:15.230438   64208 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:23:15.230509   64208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:23:15.249730   64208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:23:15.269799   64208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:23:15.450964   64208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:23:15.635410   64208 docker.go:233] disabling docker service ...
	I0612 21:23:15.635501   64208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:23:15.697471   64208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:23:15.764803   64208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:23:16.081137   64208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:23:16.478812   64208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:23:16.546390   64208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:23:16.674597   64208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:23:16.674670   64208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:23:16.712491   64208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:23:16.712578   64208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:23:16.743300   64208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:23:16.784300   64208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:23:16.880940   64208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:23:16.912286   64208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:23:16.941745   64208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:23:16.968303   64208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:23:17.005167   64208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:23:17.024192   64208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:23:17.045465   64208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:23:17.297207   64208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:23:20.578542   64208 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.281230117s)
	I0612 21:23:20.578580   64208 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:23:20.578639   64208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:23:20.584834   64208 start.go:562] Will wait 60s for crictl version
	I0612 21:23:20.584898   64208 ssh_runner.go:195] Run: which crictl
	I0612 21:23:20.589165   64208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:23:20.634671   64208 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:23:20.634763   64208 ssh_runner.go:195] Run: crio --version
	I0612 21:23:20.664827   64208 ssh_runner.go:195] Run: crio --version
	I0612 21:23:20.695872   64208 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:23:20.697505   64208 main.go:141] libmachine: (pause-037058) Calling .GetIP
	I0612 21:23:20.700078   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:20.700461   64208 main.go:141] libmachine: (pause-037058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:3e:bd", ip: ""} in network mk-pause-037058: {Iface:virbr3 ExpiryTime:2024-06-12 22:22:14 +0000 UTC Type:0 Mac:52:54:00:ec:3e:bd Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:pause-037058 Clientid:01:52:54:00:ec:3e:bd}
	I0612 21:23:20.700484   64208 main.go:141] libmachine: (pause-037058) DBG | domain pause-037058 has defined IP address 192.168.61.183 and MAC address 52:54:00:ec:3e:bd in network mk-pause-037058
	I0612 21:23:20.700755   64208 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0612 21:23:20.705940   64208 kubeadm.go:877] updating cluster {Name:pause-037058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1
ClusterName:pause-037058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.183 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:23:20.706070   64208 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:23:20.706134   64208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:23:20.743497   64208 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:23:20.743518   64208 crio.go:433] Images already preloaded, skipping extraction
	I0612 21:23:20.743564   64208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:23:20.829826   64208 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:23:20.829852   64208 cache_images.go:84] Images are preloaded, skipping loading
	I0612 21:23:20.829860   64208 kubeadm.go:928] updating node { 192.168.61.183 8443 v1.30.1 crio true true} ...
	I0612 21:23:20.829970   64208 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-037058 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:pause-037058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:23:20.830031   64208 ssh_runner.go:195] Run: crio config
	I0612 21:23:21.076854   64208 cni.go:84] Creating CNI manager for ""
	I0612 21:23:21.076879   64208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:23:21.076892   64208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:23:21.076920   64208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.183 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-037058 NodeName:pause-037058 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:23:21.077085   64208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-037058"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:23:21.077156   64208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:23:21.124776   64208 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:23:21.124903   64208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:23:21.177180   64208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0612 21:23:21.251813   64208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:23:21.311772   64208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0612 21:23:21.337133   64208 ssh_runner.go:195] Run: grep 192.168.61.183	control-plane.minikube.internal$ /etc/hosts
	I0612 21:23:21.344395   64208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:23:21.502193   64208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:23:21.516584   64208 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/pause-037058 for IP: 192.168.61.183
	I0612 21:23:21.516604   64208 certs.go:194] generating shared ca certs ...
	I0612 21:23:21.516622   64208 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:23:21.516773   64208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:23:21.516826   64208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:23:21.516840   64208 certs.go:256] generating profile certs ...
	I0612 21:23:21.516934   64208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/pause-037058/client.key
	I0612 21:23:21.517014   64208 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/pause-037058/apiserver.key.4acd59c2
	I0612 21:23:21.517072   64208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/pause-037058/proxy-client.key
	I0612 21:23:21.517231   64208 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:23:21.517277   64208 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:23:21.517287   64208 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:23:21.517309   64208 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:23:21.517338   64208 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:23:21.517361   64208 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:23:21.517397   64208 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:23:21.517985   64208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:23:21.552115   64208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:23:21.580871   64208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:23:21.605649   64208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:23:21.633490   64208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/pause-037058/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0612 21:23:21.658187   64208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/pause-037058/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:23:21.682661   64208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/pause-037058/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:23:21.708721   64208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/pause-037058/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:23:21.735001   64208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:23:21.760527   64208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:23:21.785418   64208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:23:21.810660   64208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:23:21.827430   64208 ssh_runner.go:195] Run: openssl version
	I0612 21:23:21.833588   64208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:23:21.844346   64208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:23:21.848800   64208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:23:21.848852   64208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:23:21.854673   64208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:23:21.864386   64208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:23:21.876033   64208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:23:21.880541   64208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:23:21.880587   64208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:23:21.890159   64208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:23:21.928201   64208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:23:21.939821   64208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:23:21.944442   64208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:23:21.944505   64208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:23:21.950239   64208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:23:21.959847   64208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:23:21.965158   64208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:23:21.971336   64208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:23:21.977035   64208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:23:21.984600   64208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:23:21.992105   64208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:23:21.999987   64208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 21:23:22.006011   64208 kubeadm.go:391] StartCluster: {Name:pause-037058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:pause-037058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.183 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:23:22.006171   64208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:23:22.006275   64208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:23:22.053849   64208 cri.go:89] found id: "4d71c33b9947b2c38a245115fcddcc4f50efbfbd631a39f930316bb8fbf43541"
	I0612 21:23:22.053871   64208 cri.go:89] found id: "b9fcc14b6885077e88628d5ba20ddb714d13a4863c2dd9ec01d5ecbc66e230b6"
	I0612 21:23:22.053877   64208 cri.go:89] found id: "cd786b5d1eb5c7692963978ab7e14f2ebe279fbaaeba9270b1524a55945409ba"
	I0612 21:23:22.053882   64208 cri.go:89] found id: "50c42e02d02ef265c7e5a002369aec304334981a61ed2a6364afe368f9c73408"
	I0612 21:23:22.053885   64208 cri.go:89] found id: "544c4dcbf5456e439d42ae9273aeddc8e8a57cdfc4b1d787747e1cb5efa59463"
	I0612 21:23:22.053889   64208 cri.go:89] found id: "008e0213094feaae473c4f22c7169eee1242f41344516677d0824d274db2f68a"
	I0612 21:23:22.053892   64208 cri.go:89] found id: ""
	I0612 21:23:22.053947   64208 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-037058 -n pause-037058
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-037058 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-037058 logs -n 25: (1.426433686s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-719458             | running-upgrade-719458    | jenkins | v1.33.1 | 12 Jun 24 21:19 UTC | 12 Jun 24 21:21 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-436071           | force-systemd-env-436071  | jenkins | v1.33.1 | 12 Jun 24 21:20 UTC | 12 Jun 24 21:20 UTC |
	| start   | -p force-systemd-flag-732641          | force-systemd-flag-732641 | jenkins | v1.33.1 | 12 Jun 24 21:20 UTC | 12 Jun 24 21:21 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-721096 sudo           | NoKubernetes-721096       | jenkins | v1.33.1 | 12 Jun 24 21:20 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-721096                | NoKubernetes-721096       | jenkins | v1.33.1 | 12 Jun 24 21:20 UTC | 12 Jun 24 21:20 UTC |
	| start   | -p NoKubernetes-721096                | NoKubernetes-721096       | jenkins | v1.33.1 | 12 Jun 24 21:20 UTC | 12 Jun 24 21:21 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-732641 ssh cat     | force-systemd-flag-732641 | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:21 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-732641          | force-systemd-flag-732641 | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:21 UTC |
	| start   | -p cert-expiration-112791             | cert-expiration-112791    | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:21 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-721096 sudo           | NoKubernetes-721096       | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-721096                | NoKubernetes-721096       | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:21 UTC |
	| start   | -p cert-options-449240                | cert-options-449240       | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:22 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-719458             | running-upgrade-719458    | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:21 UTC |
	| start   | -p pause-037058 --memory=2048         | pause-037058              | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:22 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-724108          | kubernetes-upgrade-724108 | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:22 UTC |
	| start   | -p kubernetes-upgrade-724108          | kubernetes-upgrade-724108 | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:23 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-449240 ssh               | cert-options-449240       | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:22 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-449240 -- sudo        | cert-options-449240       | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:22 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-449240                | cert-options-449240       | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:22 UTC |
	| start   | -p stopped-upgrade-776864             | minikube                  | jenkins | v1.26.0 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:23 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p pause-037058                       | pause-037058              | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:23 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-724108          | kubernetes-upgrade-724108 | jenkins | v1.33.1 | 12 Jun 24 21:23 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-724108          | kubernetes-upgrade-724108 | jenkins | v1.33.1 | 12 Jun 24 21:23 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-776864 stop           | minikube                  | jenkins | v1.26.0 | 12 Jun 24 21:23 UTC | 12 Jun 24 21:23 UTC |
	| start   | -p stopped-upgrade-776864             | stopped-upgrade-776864    | jenkins | v1.33.1 | 12 Jun 24 21:23 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
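	Reading the two pause-037058 rows back out of the Audit table gives the pair of invocations this test exercises: the initial start and the second start that should not reconfigure the cluster. Joined into single commands (flags verbatim from the table) they can be replayed as:

	  # first start: fresh profile, wait for all components
	  out/minikube-linux-amd64 start -p pause-037058 --memory=2048 --install-addons=false --wait=all --driver=kvm2 --container-runtime=crio
	  # second start: same profile, verbose logging, no reconfiguration expected
	  out/minikube-linux-amd64 start -p pause-037058 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio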
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 21:23:39
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 21:23:39.686195   64624 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:23:39.686318   64624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:23:39.686330   64624 out.go:304] Setting ErrFile to fd 2...
	I0612 21:23:39.686337   64624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:23:39.686569   64624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:23:39.687073   64624 out.go:298] Setting JSON to false
	I0612 21:23:39.688054   64624 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7565,"bootTime":1718219855,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:23:39.688115   64624 start.go:139] virtualization: kvm guest
	I0612 21:23:39.691312   64624 out.go:177] * [stopped-upgrade-776864] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:23:39.692759   64624 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:23:39.692774   64624 notify.go:220] Checking for updates...
	I0612 21:23:39.694032   64624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:23:39.695295   64624 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:23:39.696523   64624 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:23:39.697745   64624 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:23:39.699100   64624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:23:39.701001   64624 config.go:182] Loaded profile config "stopped-upgrade-776864": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0612 21:23:39.701575   64624 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:23:39.701628   64624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:23:39.718499   64624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34999
	I0612 21:23:39.718903   64624 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:23:39.719434   64624 main.go:141] libmachine: Using API Version  1
	I0612 21:23:39.719456   64624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:23:39.719827   64624 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:23:39.719997   64624 main.go:141] libmachine: (stopped-upgrade-776864) Calling .DriverName
	I0612 21:23:39.721930   64624 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0612 21:23:39.723165   64624 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:23:39.723481   64624 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:23:39.723527   64624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:23:39.738187   64624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33239
	I0612 21:23:39.738620   64624 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:23:39.739037   64624 main.go:141] libmachine: Using API Version  1
	I0612 21:23:39.739067   64624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:23:39.739348   64624 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:23:39.739561   64624 main.go:141] libmachine: (stopped-upgrade-776864) Calling .DriverName
	I0612 21:23:39.776516   64624 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 21:23:39.777854   64624 start.go:297] selected driver: kvm2
	I0612 21:23:39.777865   64624 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-776864 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-776
864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0612 21:23:39.777968   64624 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:23:39.778652   64624 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:23:39.778721   64624 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:23:39.793894   64624 install.go:137] /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:23:39.794253   64624 cni.go:84] Creating CNI manager for ""
	I0612 21:23:39.794283   64624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:23:39.794368   64624 start.go:340] cluster config:
	{Name:stopped-upgrade-776864 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-776864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0612 21:23:39.794498   64624 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:23:39.797220   64624 out.go:177] * Starting "stopped-upgrade-776864" primary control-plane node in "stopped-upgrade-776864" cluster
	I0612 21:23:39.798220   64624 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0612 21:23:39.798264   64624 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0612 21:23:39.798271   64624 cache.go:56] Caching tarball of preloaded images
	I0612 21:23:39.798352   64624 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:23:39.798363   64624 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0612 21:23:39.798450   64624 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/stopped-upgrade-776864/config.json ...
	I0612 21:23:39.798625   64624 start.go:360] acquireMachinesLock for stopped-upgrade-776864: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:23:39.798676   64624 start.go:364] duration metric: took 31.701µs to acquireMachinesLock for "stopped-upgrade-776864"
	I0612 21:23:39.798695   64624 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:23:39.798704   64624 fix.go:54] fixHost starting: 
	I0612 21:23:39.799015   64624 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:23:39.799050   64624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:23:39.814100   64624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43507
	I0612 21:23:39.814502   64624 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:23:39.814985   64624 main.go:141] libmachine: Using API Version  1
	I0612 21:23:39.815008   64624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:23:39.815355   64624 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:23:39.815566   64624 main.go:141] libmachine: (stopped-upgrade-776864) Calling .DriverName
	I0612 21:23:39.815727   64624 main.go:141] libmachine: (stopped-upgrade-776864) Calling .GetState
	I0612 21:23:39.817330   64624 fix.go:112] recreateIfNeeded on stopped-upgrade-776864: state=Stopped err=<nil>
	I0612 21:23:39.817350   64624 main.go:141] libmachine: (stopped-upgrade-776864) Calling .DriverName
	W0612 21:23:39.817505   64624 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:23:39.819471   64624 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-776864" ...
	I0612 21:23:39.476435   64208 pod_ready.go:102] pod "etcd-pause-037058" in "kube-system" namespace has status "Ready":"False"
	I0612 21:23:41.476775   64208 pod_ready.go:92] pod "etcd-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:41.476797   64208 pod_ready.go:81] duration metric: took 11.007930368s for pod "etcd-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.476806   64208 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.482289   64208 pod_ready.go:92] pod "kube-apiserver-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:41.482314   64208 pod_ready.go:81] duration metric: took 5.500075ms for pod "kube-apiserver-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.482326   64208 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.495009   64208 pod_ready.go:92] pod "kube-controller-manager-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:41.495038   64208 pod_ready.go:81] duration metric: took 12.703888ms for pod "kube-controller-manager-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.495056   64208 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-scm6r" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.502117   64208 pod_ready.go:92] pod "kube-proxy-scm6r" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:41.502141   64208 pod_ready.go:81] duration metric: took 7.077421ms for pod "kube-proxy-scm6r" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.502152   64208 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.507543   64208 pod_ready.go:92] pod "kube-scheduler-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:41.507562   64208 pod_ready.go:81] duration metric: took 5.403527ms for pod "kube-scheduler-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.507569   64208 pod_ready.go:38] duration metric: took 11.557312253s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:23:41.507584   64208 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:23:41.525158   64208 ops.go:34] apiserver oom_adj: -16
	I0612 21:23:41.525182   64208 kubeadm.go:591] duration metric: took 19.418374734s to restartPrimaryControlPlane
	I0612 21:23:41.525193   64208 kubeadm.go:393] duration metric: took 19.519187283s to StartCluster
	I0612 21:23:41.525213   64208 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:23:41.525331   64208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:23:41.526498   64208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:23:41.526771   64208 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.183 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:23:41.528381   64208 out.go:177] * Verifying Kubernetes components...
	I0612 21:23:41.526894   64208 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:23:41.527013   64208 config.go:182] Loaded profile config "pause-037058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:23:41.529627   64208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:23:41.531715   64208 out.go:177] * Enabled addons: 
	I0612 21:23:41.532954   64208 addons.go:510] duration metric: took 6.065504ms for enable addons: enabled=[]
	I0612 21:23:41.737973   64208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:23:41.757312   64208 node_ready.go:35] waiting up to 6m0s for node "pause-037058" to be "Ready" ...
	I0612 21:23:41.761140   64208 node_ready.go:49] node "pause-037058" has status "Ready":"True"
	I0612 21:23:41.761161   64208 node_ready.go:38] duration metric: took 3.814372ms for node "pause-037058" to be "Ready" ...
	I0612 21:23:41.761168   64208 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:23:41.875186   64208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2kgfl" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:42.273273   64208 pod_ready.go:92] pod "coredns-7db6d8ff4d-2kgfl" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:42.273314   64208 pod_ready.go:81] duration metric: took 398.09853ms for pod "coredns-7db6d8ff4d-2kgfl" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:42.273328   64208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:42.674105   64208 pod_ready.go:92] pod "etcd-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:42.674135   64208 pod_ready.go:81] duration metric: took 400.79952ms for pod "etcd-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:42.674149   64208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:43.072653   64208 pod_ready.go:92] pod "kube-apiserver-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:43.072680   64208 pod_ready.go:81] duration metric: took 398.523ms for pod "kube-apiserver-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:43.072695   64208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:43.472919   64208 pod_ready.go:92] pod "kube-controller-manager-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:43.472946   64208 pod_ready.go:81] duration metric: took 400.2425ms for pod "kube-controller-manager-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:43.472959   64208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-scm6r" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:39.820772   64624 main.go:141] libmachine: (stopped-upgrade-776864) Calling .Start
	I0612 21:23:39.820949   64624 main.go:141] libmachine: (stopped-upgrade-776864) Ensuring networks are active...
	I0612 21:23:39.821742   64624 main.go:141] libmachine: (stopped-upgrade-776864) Ensuring network default is active
	I0612 21:23:39.822056   64624 main.go:141] libmachine: (stopped-upgrade-776864) Ensuring network mk-stopped-upgrade-776864 is active
	I0612 21:23:39.822365   64624 main.go:141] libmachine: (stopped-upgrade-776864) Getting domain xml...
	I0612 21:23:39.823026   64624 main.go:141] libmachine: (stopped-upgrade-776864) Creating domain...
	I0612 21:23:41.081774   64624 main.go:141] libmachine: (stopped-upgrade-776864) Waiting to get IP...
	I0612 21:23:41.082585   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | domain stopped-upgrade-776864 has defined MAC address 52:54:00:e1:b9:5c in network mk-stopped-upgrade-776864
	I0612 21:23:41.082979   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | unable to find current IP address of domain stopped-upgrade-776864 in network mk-stopped-upgrade-776864
	I0612 21:23:41.083052   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | I0612 21:23:41.082968   64659 retry.go:31] will retry after 240.481967ms: waiting for machine to come up
	I0612 21:23:41.325611   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | domain stopped-upgrade-776864 has defined MAC address 52:54:00:e1:b9:5c in network mk-stopped-upgrade-776864
	I0612 21:23:41.326140   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | unable to find current IP address of domain stopped-upgrade-776864 in network mk-stopped-upgrade-776864
	I0612 21:23:41.326167   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | I0612 21:23:41.326099   64659 retry.go:31] will retry after 308.643373ms: waiting for machine to come up
	I0612 21:23:41.636945   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | domain stopped-upgrade-776864 has defined MAC address 52:54:00:e1:b9:5c in network mk-stopped-upgrade-776864
	I0612 21:23:41.637531   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | unable to find current IP address of domain stopped-upgrade-776864 in network mk-stopped-upgrade-776864
	I0612 21:23:41.637557   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | I0612 21:23:41.637474   64659 retry.go:31] will retry after 438.420138ms: waiting for machine to come up
	I0612 21:23:42.076976   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | domain stopped-upgrade-776864 has defined MAC address 52:54:00:e1:b9:5c in network mk-stopped-upgrade-776864
	I0612 21:23:42.077534   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | unable to find current IP address of domain stopped-upgrade-776864 in network mk-stopped-upgrade-776864
	I0612 21:23:42.077559   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | I0612 21:23:42.077498   64659 retry.go:31] will retry after 541.198513ms: waiting for machine to come up
	I0612 21:23:42.620200   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | domain stopped-upgrade-776864 has defined MAC address 52:54:00:e1:b9:5c in network mk-stopped-upgrade-776864
	I0612 21:23:42.620745   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | unable to find current IP address of domain stopped-upgrade-776864 in network mk-stopped-upgrade-776864
	I0612 21:23:42.620775   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | I0612 21:23:42.620690   64659 retry.go:31] will retry after 461.764015ms: waiting for machine to come up
	I0612 21:23:43.084037   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | domain stopped-upgrade-776864 has defined MAC address 52:54:00:e1:b9:5c in network mk-stopped-upgrade-776864
	I0612 21:23:43.084539   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | unable to find current IP address of domain stopped-upgrade-776864 in network mk-stopped-upgrade-776864
	I0612 21:23:43.084566   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | I0612 21:23:43.084499   64659 retry.go:31] will retry after 795.810621ms: waiting for machine to come up
	I0612 21:23:43.881411   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | domain stopped-upgrade-776864 has defined MAC address 52:54:00:e1:b9:5c in network mk-stopped-upgrade-776864
	I0612 21:23:43.881867   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | unable to find current IP address of domain stopped-upgrade-776864 in network mk-stopped-upgrade-776864
	I0612 21:23:43.881895   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | I0612 21:23:43.881812   64659 retry.go:31] will retry after 967.517152ms: waiting for machine to come up
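	While the 64624 process above polls for the stopped-upgrade-776864 VM's IP, the same state can be inspected from the libvirt side. A sketch assuming virsh is available on the host; the domain name, the network name and the qemu:///system URI are taken from the log:

	  # show the restarted domain and its addresses
	  virsh -c qemu:///system dominfo stopped-upgrade-776864
	  virsh -c qemu:///system domifaddr stopped-upgrade-776864
	  # confirm the per-profile network is active
	  virsh -c qemu:///system net-info mk-stopped-upgrade-776864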
	I0612 21:23:43.873270   64208 pod_ready.go:92] pod "kube-proxy-scm6r" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:43.873304   64208 pod_ready.go:81] duration metric: took 400.336886ms for pod "kube-proxy-scm6r" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:43.873319   64208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:44.273307   64208 pod_ready.go:92] pod "kube-scheduler-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:44.273334   64208 pod_ready.go:81] duration metric: took 400.006748ms for pod "kube-scheduler-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:44.273341   64208 pod_ready.go:38] duration metric: took 2.512163803s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:23:44.273357   64208 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:23:44.273405   64208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:23:44.288217   64208 api_server.go:72] duration metric: took 2.761402136s to wait for apiserver process to appear ...
	I0612 21:23:44.288251   64208 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:23:44.288277   64208 api_server.go:253] Checking apiserver healthz at https://192.168.61.183:8443/healthz ...
	I0612 21:23:44.296224   64208 api_server.go:279] https://192.168.61.183:8443/healthz returned 200:
	ok
	I0612 21:23:44.297310   64208 api_server.go:141] control plane version: v1.30.1
	I0612 21:23:44.297337   64208 api_server.go:131] duration metric: took 9.078219ms to wait for apiserver health ...
	I0612 21:23:44.297348   64208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:23:44.475296   64208 system_pods.go:59] 6 kube-system pods found
	I0612 21:23:44.475326   64208 system_pods.go:61] "coredns-7db6d8ff4d-2kgfl" [9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc] Running
	I0612 21:23:44.475330   64208 system_pods.go:61] "etcd-pause-037058" [9d83914c-7387-4b15-bc28-3f1f8e9f6254] Running
	I0612 21:23:44.475339   64208 system_pods.go:61] "kube-apiserver-pause-037058" [0c8b0c81-a37d-4759-83f5-74f0fa0e0830] Running
	I0612 21:23:44.475343   64208 system_pods.go:61] "kube-controller-manager-pause-037058" [96c76d53-eec8-480b-bfa0-8d8170424d0f] Running
	I0612 21:23:44.475346   64208 system_pods.go:61] "kube-proxy-scm6r" [3366c4e4-7aae-4051-97b5-f0544c6dfe66] Running
	I0612 21:23:44.475349   64208 system_pods.go:61] "kube-scheduler-pause-037058" [3be565c9-d28b-41f4-b9c0-5af58beb72ad] Running
	I0612 21:23:44.475355   64208 system_pods.go:74] duration metric: took 178.001054ms to wait for pod list to return data ...
	I0612 21:23:44.475361   64208 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:23:44.673667   64208 default_sa.go:45] found service account: "default"
	I0612 21:23:44.673699   64208 default_sa.go:55] duration metric: took 198.330595ms for default service account to be created ...
	I0612 21:23:44.673711   64208 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:23:44.875981   64208 system_pods.go:86] 6 kube-system pods found
	I0612 21:23:44.876011   64208 system_pods.go:89] "coredns-7db6d8ff4d-2kgfl" [9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc] Running
	I0612 21:23:44.876017   64208 system_pods.go:89] "etcd-pause-037058" [9d83914c-7387-4b15-bc28-3f1f8e9f6254] Running
	I0612 21:23:44.876021   64208 system_pods.go:89] "kube-apiserver-pause-037058" [0c8b0c81-a37d-4759-83f5-74f0fa0e0830] Running
	I0612 21:23:44.876025   64208 system_pods.go:89] "kube-controller-manager-pause-037058" [96c76d53-eec8-480b-bfa0-8d8170424d0f] Running
	I0612 21:23:44.876028   64208 system_pods.go:89] "kube-proxy-scm6r" [3366c4e4-7aae-4051-97b5-f0544c6dfe66] Running
	I0612 21:23:44.876032   64208 system_pods.go:89] "kube-scheduler-pause-037058" [3be565c9-d28b-41f4-b9c0-5af58beb72ad] Running
	I0612 21:23:44.876040   64208 system_pods.go:126] duration metric: took 202.322901ms to wait for k8s-apps to be running ...
	I0612 21:23:44.876047   64208 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:23:44.876106   64208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:23:44.894305   64208 system_svc.go:56] duration metric: took 18.249302ms WaitForService to wait for kubelet
	I0612 21:23:44.894335   64208 kubeadm.go:576] duration metric: took 3.367533301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:23:44.894354   64208 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:23:45.073494   64208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:23:45.073520   64208 node_conditions.go:123] node cpu capacity is 2
	I0612 21:23:45.073530   64208 node_conditions.go:105] duration metric: took 179.171788ms to run NodePressure ...
	I0612 21:23:45.073540   64208 start.go:240] waiting for startup goroutines ...
	I0612 21:23:45.073546   64208 start.go:245] waiting for cluster config update ...
	I0612 21:23:45.073553   64208 start.go:254] writing updated cluster config ...
	I0612 21:23:45.073812   64208 ssh_runner.go:195] Run: rm -f paused
	I0612 21:23:45.131164   64208 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:23:45.133572   64208 out.go:177] * Done! kubectl is now configured to use "pause-037058" cluster and "default" namespace by default
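	The second start ends with the apiserver healthz probe returning 200 and all six kube-system pods Ready. A sketch of re-running those checks from the host once kubectl is pointed at the profile; the endpoint and context names are taken from the log, and -k only skips verification of the self-signed apiserver certificate:

	  # apiserver health, as probed at 21:23:44 above
	  curl -k https://192.168.61.183:8443/healthz
	  # system pods should all report Ready, mirroring the pod_ready waits
	  kubectl --context pause-037058 -n kube-system wait --for=condition=ready pod --all --timeout=2m
	  # kubelet service check, as run by system_svc.go
	  out/minikube-linux-amd64 -p pause-037058 ssh "sudo systemctl is-active kubelet"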
	
	
	==> CRI-O <==
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.817149011Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718227425817122696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3b59456-2615-4f27-aa51-d47923723e78 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.818026729Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58b612a6-aad8-4595-a541-2e254bcf2aff name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.818113781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58b612a6-aad8-4595-a541-2e254bcf2aff name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.818421119Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bb0de71317f3390209a54c48373bb8b2807098ffde057c5185c376cb7ff994b,PodSandboxId:919def9aca75a6d69338455d2b30ad963305362e7d7141adb5977201f513b7ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227408761000261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba87272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b6086205349bb9a604254ea7b1e3281b8da5477c042ff4bac6ae5d77498c12,PodSandboxId:9038010f6c043cb6fd6ee683a9de4447569c5499870e0391863b57fdfca84369,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718227408732573466,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355f6b69042f695c626d23527932fbdf663915c5565fcd11085541facedc04a4,PodSandboxId:a143413ada3e631ad6d4e891bc5808a3abd65bb87d6a2c94801dc7c0b3b28781,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718227403919599031,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed37f693bb8d6c8d41ce1afd6622548ac8432eaeab0e92e327d9b3c6dc94a239,PodSandboxId:f74141f9f9a70597af8f6e2aea2d137b1cd344e53ec42c2482971e65af18c28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718227403898749980,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b54e5fc600aecfcda8937c3d5d97e091fdc2ea412252c0646060c2a7caddb,PodSandboxId:695981f57fbc41789dd514a73a69de627bcc012ebc25b8dfff2919b19d927491,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718227403923084670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3d47884a40251484da885cf97fdc5638aede508a2a170b1694ddcf5ddc2739,PodSandboxId:92f1c525698b21333d3fb84231a2a253dd4b4948bd3b82b7254bc4ff84c6d3ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718227403908279050,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io
.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71c33b9947b2c38a245115fcddcc4f50efbfbd631a39f930316bb8fbf43541,PodSandboxId:967bc61836b86397f64eb984ae919bc26f3006115c179f8402193a817ea9ef80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227396974936993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba8
7272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544c4dcbf5456e439d42ae9273aeddc8e8a57cdfc4b1d787747e1cb5efa59463,PodSandboxId:a65d17b2d1768a5bab749f25c547d0d8accc5683079b3b037824550703d9b289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718227396319132902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fcc14b6885077e88628d5ba20ddb714d13a4863c2dd9ec01d5ecbc66e230b6,PodSandboxId:2f6bcb9dcd6989244c1b9096e05350163cd52e14b75bf59753a6aadf30cbd88e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718227396509516663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd786b5d1eb5c7692963978ab7e14f2ebe279fbaaeba9270b1524a55945409ba,PodSandboxId:402f7d2555c853141d70a9a1535a4fd3740dad93f3f446b36faa143c9b2b8721,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718227396354798603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c42e02d02ef265c7e5a002369aec304334981a61ed2a6364afe368f9c73408,PodSandboxId:0fdae0b8583cd662a0add65b324aa56542781882b028101fa7b0bcba2168c7f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718227396328367033,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008e0213094feaae473c4f22c7169eee1242f41344516677d0824d274db2f68a,PodSandboxId:deec78588d01221ef82f2e8921dcd3ddb32fe4955c2febd9e50a111949569834,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718227396207660071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58b612a6-aad8-4595-a541-2e254bcf2aff name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.867585114Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4cb8e916-df9e-43d0-8526-06c97ae313e2 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.867673911Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4cb8e916-df9e-43d0-8526-06c97ae313e2 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.868795423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b908be11-8a49-49af-8412-c04dfa59777c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.869232156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718227425869209193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b908be11-8a49-49af-8412-c04dfa59777c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.869984047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a88ebd05-62a6-43cf-be32-0f9f2c10dbfa name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.870057209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a88ebd05-62a6-43cf-be32-0f9f2c10dbfa name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.870309766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bb0de71317f3390209a54c48373bb8b2807098ffde057c5185c376cb7ff994b,PodSandboxId:919def9aca75a6d69338455d2b30ad963305362e7d7141adb5977201f513b7ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227408761000261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba87272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b6086205349bb9a604254ea7b1e3281b8da5477c042ff4bac6ae5d77498c12,PodSandboxId:9038010f6c043cb6fd6ee683a9de4447569c5499870e0391863b57fdfca84369,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718227408732573466,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355f6b69042f695c626d23527932fbdf663915c5565fcd11085541facedc04a4,PodSandboxId:a143413ada3e631ad6d4e891bc5808a3abd65bb87d6a2c94801dc7c0b3b28781,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718227403919599031,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed37f693bb8d6c8d41ce1afd6622548ac8432eaeab0e92e327d9b3c6dc94a239,PodSandboxId:f74141f9f9a70597af8f6e2aea2d137b1cd344e53ec42c2482971e65af18c28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718227403898749980,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b54e5fc600aecfcda8937c3d5d97e091fdc2ea412252c0646060c2a7caddb,PodSandboxId:695981f57fbc41789dd514a73a69de627bcc012ebc25b8dfff2919b19d927491,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718227403923084670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3d47884a40251484da885cf97fdc5638aede508a2a170b1694ddcf5ddc2739,PodSandboxId:92f1c525698b21333d3fb84231a2a253dd4b4948bd3b82b7254bc4ff84c6d3ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718227403908279050,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io
.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71c33b9947b2c38a245115fcddcc4f50efbfbd631a39f930316bb8fbf43541,PodSandboxId:967bc61836b86397f64eb984ae919bc26f3006115c179f8402193a817ea9ef80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227396974936993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba8
7272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544c4dcbf5456e439d42ae9273aeddc8e8a57cdfc4b1d787747e1cb5efa59463,PodSandboxId:a65d17b2d1768a5bab749f25c547d0d8accc5683079b3b037824550703d9b289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718227396319132902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fcc14b6885077e88628d5ba20ddb714d13a4863c2dd9ec01d5ecbc66e230b6,PodSandboxId:2f6bcb9dcd6989244c1b9096e05350163cd52e14b75bf59753a6aadf30cbd88e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718227396509516663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd786b5d1eb5c7692963978ab7e14f2ebe279fbaaeba9270b1524a55945409ba,PodSandboxId:402f7d2555c853141d70a9a1535a4fd3740dad93f3f446b36faa143c9b2b8721,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718227396354798603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c42e02d02ef265c7e5a002369aec304334981a61ed2a6364afe368f9c73408,PodSandboxId:0fdae0b8583cd662a0add65b324aa56542781882b028101fa7b0bcba2168c7f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718227396328367033,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008e0213094feaae473c4f22c7169eee1242f41344516677d0824d274db2f68a,PodSandboxId:deec78588d01221ef82f2e8921dcd3ddb32fe4955c2febd9e50a111949569834,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718227396207660071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a88ebd05-62a6-43cf-be32-0f9f2c10dbfa name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.916125548Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0fa1756-11c4-46d3-9b7c-888159c827c9 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.916257176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0fa1756-11c4-46d3-9b7c-888159c827c9 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.918112234Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99a29c01-b587-4807-ae4f-1dbc24159612 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.918674051Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718227425918637762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99a29c01-b587-4807-ae4f-1dbc24159612 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.919569095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac56dafd-1da3-473d-b0b9-960e99b9eae4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.919622162Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac56dafd-1da3-473d-b0b9-960e99b9eae4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.920070399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bb0de71317f3390209a54c48373bb8b2807098ffde057c5185c376cb7ff994b,PodSandboxId:919def9aca75a6d69338455d2b30ad963305362e7d7141adb5977201f513b7ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227408761000261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba87272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b6086205349bb9a604254ea7b1e3281b8da5477c042ff4bac6ae5d77498c12,PodSandboxId:9038010f6c043cb6fd6ee683a9de4447569c5499870e0391863b57fdfca84369,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718227408732573466,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355f6b69042f695c626d23527932fbdf663915c5565fcd11085541facedc04a4,PodSandboxId:a143413ada3e631ad6d4e891bc5808a3abd65bb87d6a2c94801dc7c0b3b28781,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718227403919599031,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed37f693bb8d6c8d41ce1afd6622548ac8432eaeab0e92e327d9b3c6dc94a239,PodSandboxId:f74141f9f9a70597af8f6e2aea2d137b1cd344e53ec42c2482971e65af18c28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718227403898749980,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b54e5fc600aecfcda8937c3d5d97e091fdc2ea412252c0646060c2a7caddb,PodSandboxId:695981f57fbc41789dd514a73a69de627bcc012ebc25b8dfff2919b19d927491,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718227403923084670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3d47884a40251484da885cf97fdc5638aede508a2a170b1694ddcf5ddc2739,PodSandboxId:92f1c525698b21333d3fb84231a2a253dd4b4948bd3b82b7254bc4ff84c6d3ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718227403908279050,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io
.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71c33b9947b2c38a245115fcddcc4f50efbfbd631a39f930316bb8fbf43541,PodSandboxId:967bc61836b86397f64eb984ae919bc26f3006115c179f8402193a817ea9ef80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227396974936993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba8
7272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544c4dcbf5456e439d42ae9273aeddc8e8a57cdfc4b1d787747e1cb5efa59463,PodSandboxId:a65d17b2d1768a5bab749f25c547d0d8accc5683079b3b037824550703d9b289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718227396319132902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fcc14b6885077e88628d5ba20ddb714d13a4863c2dd9ec01d5ecbc66e230b6,PodSandboxId:2f6bcb9dcd6989244c1b9096e05350163cd52e14b75bf59753a6aadf30cbd88e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718227396509516663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd786b5d1eb5c7692963978ab7e14f2ebe279fbaaeba9270b1524a55945409ba,PodSandboxId:402f7d2555c853141d70a9a1535a4fd3740dad93f3f446b36faa143c9b2b8721,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718227396354798603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c42e02d02ef265c7e5a002369aec304334981a61ed2a6364afe368f9c73408,PodSandboxId:0fdae0b8583cd662a0add65b324aa56542781882b028101fa7b0bcba2168c7f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718227396328367033,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008e0213094feaae473c4f22c7169eee1242f41344516677d0824d274db2f68a,PodSandboxId:deec78588d01221ef82f2e8921dcd3ddb32fe4955c2febd9e50a111949569834,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718227396207660071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac56dafd-1da3-473d-b0b9-960e99b9eae4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.974667206Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27482d01-7206-49d8-86f6-479fd7648ddb name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.974760995Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27482d01-7206-49d8-86f6-479fd7648ddb name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.976137791Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0d47f0a-1d9a-4159-902a-473c834bb99f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.976793031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718227425976763684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0d47f0a-1d9a-4159-902a-473c834bb99f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.977660012Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f811b1e0-aab5-4a52-b28b-ced30c433319 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.977724284Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f811b1e0-aab5-4a52-b28b-ced30c433319 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:45 pause-037058 crio[2780]: time="2024-06-12 21:23:45.978044408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bb0de71317f3390209a54c48373bb8b2807098ffde057c5185c376cb7ff994b,PodSandboxId:919def9aca75a6d69338455d2b30ad963305362e7d7141adb5977201f513b7ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227408761000261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba87272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b6086205349bb9a604254ea7b1e3281b8da5477c042ff4bac6ae5d77498c12,PodSandboxId:9038010f6c043cb6fd6ee683a9de4447569c5499870e0391863b57fdfca84369,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718227408732573466,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355f6b69042f695c626d23527932fbdf663915c5565fcd11085541facedc04a4,PodSandboxId:a143413ada3e631ad6d4e891bc5808a3abd65bb87d6a2c94801dc7c0b3b28781,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718227403919599031,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed37f693bb8d6c8d41ce1afd6622548ac8432eaeab0e92e327d9b3c6dc94a239,PodSandboxId:f74141f9f9a70597af8f6e2aea2d137b1cd344e53ec42c2482971e65af18c28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718227403898749980,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b54e5fc600aecfcda8937c3d5d97e091fdc2ea412252c0646060c2a7caddb,PodSandboxId:695981f57fbc41789dd514a73a69de627bcc012ebc25b8dfff2919b19d927491,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718227403923084670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3d47884a40251484da885cf97fdc5638aede508a2a170b1694ddcf5ddc2739,PodSandboxId:92f1c525698b21333d3fb84231a2a253dd4b4948bd3b82b7254bc4ff84c6d3ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718227403908279050,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io
.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71c33b9947b2c38a245115fcddcc4f50efbfbd631a39f930316bb8fbf43541,PodSandboxId:967bc61836b86397f64eb984ae919bc26f3006115c179f8402193a817ea9ef80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227396974936993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba8
7272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544c4dcbf5456e439d42ae9273aeddc8e8a57cdfc4b1d787747e1cb5efa59463,PodSandboxId:a65d17b2d1768a5bab749f25c547d0d8accc5683079b3b037824550703d9b289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718227396319132902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fcc14b6885077e88628d5ba20ddb714d13a4863c2dd9ec01d5ecbc66e230b6,PodSandboxId:2f6bcb9dcd6989244c1b9096e05350163cd52e14b75bf59753a6aadf30cbd88e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718227396509516663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd786b5d1eb5c7692963978ab7e14f2ebe279fbaaeba9270b1524a55945409ba,PodSandboxId:402f7d2555c853141d70a9a1535a4fd3740dad93f3f446b36faa143c9b2b8721,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718227396354798603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c42e02d02ef265c7e5a002369aec304334981a61ed2a6364afe368f9c73408,PodSandboxId:0fdae0b8583cd662a0add65b324aa56542781882b028101fa7b0bcba2168c7f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718227396328367033,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008e0213094feaae473c4f22c7169eee1242f41344516677d0824d274db2f68a,PodSandboxId:deec78588d01221ef82f2e8921dcd3ddb32fe4955c2febd9e50a111949569834,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718227396207660071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f811b1e0-aab5-4a52-b28b-ced30c433319 name=/runtime.v1.RuntimeService/ListContainers
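	The repeated Version, ImageFsInfo, and ListContainers requests above are routine CRI polling against the CRI-O runtime; none of them indicate an error. For reference, a roughly equivalent set of queries can be issued by hand from the node with crictl (a sketch, assuming crictl is installed in the minikube VM and CRI-O listens on the same unix:///var/run/crio/crio.sock seen in the node annotations):
	
	    # mirrors RuntimeService/Version
	    crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    # mirrors RuntimeService/ListContainers with no filter (includes exited attempts)
	    crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	    # mirrors ImageService/ImageFsInfo
	    crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo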
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4bb0de71317f3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 seconds ago      Running             coredns                   2                   919def9aca75a       coredns-7db6d8ff4d-2kgfl
	21b6086205349       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   17 seconds ago      Running             kube-proxy                2                   9038010f6c043       kube-proxy-scm6r
	093b54e5fc600       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   22 seconds ago      Running             kube-controller-manager   2                   695981f57fbc4       kube-controller-manager-pause-037058
	355f6b69042f6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   22 seconds ago      Running             etcd                      2                   a143413ada3e6       etcd-pause-037058
	5c3d47884a402       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   22 seconds ago      Running             kube-apiserver            2                   92f1c525698b2       kube-apiserver-pause-037058
	ed37f693bb8d6       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   22 seconds ago      Running             kube-scheduler            2                   f74141f9f9a70       kube-scheduler-pause-037058
	4d71c33b9947b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   29 seconds ago      Exited              coredns                   1                   967bc61836b86       coredns-7db6d8ff4d-2kgfl
	b9fcc14b68850       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   29 seconds ago      Exited              kube-controller-manager   1                   2f6bcb9dcd698       kube-controller-manager-pause-037058
	cd786b5d1eb5c       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   29 seconds ago      Exited              kube-scheduler            1                   402f7d2555c85       kube-scheduler-pause-037058
	50c42e02d02ef       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   29 seconds ago      Exited              kube-apiserver            1                   0fdae0b8583cd       kube-apiserver-pause-037058
	544c4dcbf5456       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   29 seconds ago      Exited              kube-proxy                1                   a65d17b2d1768       kube-proxy-scm6r
	008e0213094fe       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   29 seconds ago      Exited              etcd                      1                   deec78588d012       etcd-pause-037058
	
	
	==> coredns [4bb0de71317f3390209a54c48373bb8b2807098ffde057c5185c376cb7ff994b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48442 - 43804 "HINFO IN 1277118975026617333.5542116640370041542. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015192133s
	
	
	==> coredns [4d71c33b9947b2c38a245115fcddcc4f50efbfbd631a39f930316bb8fbf43541] <==
	
	
	==> describe nodes <==
	Name:               pause-037058
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-037058
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=pause-037058
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T21_22_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:22:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-037058
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:23:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:23:28 +0000   Wed, 12 Jun 2024 21:22:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:23:28 +0000   Wed, 12 Jun 2024 21:22:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:23:28 +0000   Wed, 12 Jun 2024 21:22:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:23:28 +0000   Wed, 12 Jun 2024 21:22:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.183
	  Hostname:    pause-037058
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 253002bff60544a8a83c3314f9d3b9a2
	  System UUID:                253002bf-f605-44a8-a83c-3314f9d3b9a2
	  Boot ID:                    bb272c6b-e016-46d3-809a-560e2b565957
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2kgfl                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     50s
	  kube-system                 etcd-pause-037058                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         66s
	  kube-system                 kube-apiserver-pause-037058             250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-pause-037058    200m (10%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-proxy-scm6r                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-scheduler-pause-037058             100m (5%)     0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 49s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasSufficientPID     66s                kubelet          Node pause-037058 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  66s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  66s                kubelet          Node pause-037058 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    66s                kubelet          Node pause-037058 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 66s                kubelet          Starting kubelet.
	  Normal  NodeReady                65s                kubelet          Node pause-037058 status is now: NodeReady
	  Normal  RegisteredNode           51s                node-controller  Node pause-037058 event: Registered Node pause-037058 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-037058 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-037058 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-037058 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                 node-controller  Node pause-037058 event: Registered Node pause-037058 in Controller
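	The node summary above corresponds to a kubectl describe of the single control-plane node. A comparable live snapshot can be taken against this profile (a sketch, assuming the pause-037058 kubeconfig context still exists, which minikube creates per profile):
	
	    kubectl --context pause-037058 describe node pause-037058
	    # or through the bundled kubectl wrapper
	    out/minikube-linux-amd64 -p pause-037058 kubectl -- describe node pause-037058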
	
	
	==> dmesg <==
	[  +8.277333] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.062043] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062202] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.178557] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.131785] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.347801] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.411775] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.067530] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.297705] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +1.795410] kauditd_printk_skb: 57 callbacks suppressed
	[  +4.737731] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[  +4.679563] kauditd_printk_skb: 58 callbacks suppressed
	[ +11.184438] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[Jun12 21:23] systemd-fstab-generator[2174]: Ignoring "noauto" option for root device
	[  +0.081630] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.071517] systemd-fstab-generator[2186]: Ignoring "noauto" option for root device
	[  +0.371481] systemd-fstab-generator[2257]: Ignoring "noauto" option for root device
	[  +0.411652] systemd-fstab-generator[2410]: Ignoring "noauto" option for root device
	[  +0.839440] systemd-fstab-generator[2703]: Ignoring "noauto" option for root device
	[  +3.680798] kauditd_printk_skb: 173 callbacks suppressed
	[  +0.593347] systemd-fstab-generator[3302]: Ignoring "noauto" option for root device
	[  +1.770641] systemd-fstab-generator[3470]: Ignoring "noauto" option for root device
	[  +5.711215] kauditd_printk_skb: 109 callbacks suppressed
	[ +11.406155] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.307280] systemd-fstab-generator[3899]: Ignoring "noauto" option for root device
	
	
	==> etcd [008e0213094feaae473c4f22c7169eee1242f41344516677d0824d274db2f68a] <==
	{"level":"warn","ts":"2024-06-12T21:23:16.973122Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-12T21:23:16.973384Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.61.183:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.61.183:2380","--initial-cluster=pause-037058=https://192.168.61.183:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.61.183:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.61.183:2380","--name=pause-037058","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-06-12T21:23:16.97412Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-06-12T21:23:16.97434Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-12T21:23:16.974447Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.61.183:2380"]}
	{"level":"info","ts":"2024-06-12T21:23:16.974608Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-12T21:23:16.975973Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.183:2379"]}
	{"level":"info","ts":"2024-06-12T21:23:16.976323Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-037058","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.61.183:2380"],"listen-peer-urls":["https://192.168.61.183:2380"],"advertise-client-urls":["https://192.168.61.183:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.183:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cl
uster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-06-12T21:23:17.023661Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"46.49235ms"}
	{"level":"info","ts":"2024-06-12T21:23:17.100241Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-06-12T21:23:17.212378Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"438aa8919cf6d084","local-member-id":"378cdee1d1b27193","commit-index":390}
	{"level":"info","ts":"2024-06-12T21:23:17.212704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 switched to configuration voters=()"}
	{"level":"info","ts":"2024-06-12T21:23:17.212811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 became follower at term 2"}
	{"level":"info","ts":"2024-06-12T21:23:17.212829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 378cdee1d1b27193 [peers: [], term: 2, commit: 390, applied: 0, lastindex: 390, lastterm: 2]"}
	
	
	==> etcd [355f6b69042f695c626d23527932fbdf663915c5565fcd11085541facedc04a4] <==
	{"level":"info","ts":"2024-06-12T21:23:24.393561Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-12T21:23:24.39359Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-12T21:23:24.394148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 switched to configuration voters=(4002819230292668819)"}
	{"level":"info","ts":"2024-06-12T21:23:24.394285Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"438aa8919cf6d084","local-member-id":"378cdee1d1b27193","added-peer-id":"378cdee1d1b27193","added-peer-peer-urls":["https://192.168.61.183:2380"]}
	{"level":"info","ts":"2024-06-12T21:23:24.394502Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"438aa8919cf6d084","local-member-id":"378cdee1d1b27193","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:23:24.394594Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:23:24.416447Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-12T21:23:24.416736Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.183:2380"}
	{"level":"info","ts":"2024-06-12T21:23:24.416919Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.183:2380"}
	{"level":"info","ts":"2024-06-12T21:23:24.418316Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"378cdee1d1b27193","initial-advertise-peer-urls":["https://192.168.61.183:2380"],"listen-peer-urls":["https://192.168.61.183:2380"],"advertise-client-urls":["https://192.168.61.183:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.183:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-12T21:23:24.418423Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-12T21:23:26.148748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-12T21:23:26.148809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-12T21:23:26.148839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 received MsgPreVoteResp from 378cdee1d1b27193 at term 2"}
	{"level":"info","ts":"2024-06-12T21:23:26.148901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 became candidate at term 3"}
	{"level":"info","ts":"2024-06-12T21:23:26.148909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 received MsgVoteResp from 378cdee1d1b27193 at term 3"}
	{"level":"info","ts":"2024-06-12T21:23:26.14893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 became leader at term 3"}
	{"level":"info","ts":"2024-06-12T21:23:26.148937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 378cdee1d1b27193 elected leader 378cdee1d1b27193 at term 3"}
	{"level":"info","ts":"2024-06-12T21:23:26.154624Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"378cdee1d1b27193","local-member-attributes":"{Name:pause-037058 ClientURLs:[https://192.168.61.183:2379]}","request-path":"/0/members/378cdee1d1b27193/attributes","cluster-id":"438aa8919cf6d084","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-12T21:23:26.154718Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:23:26.155021Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-12T21:23:26.155115Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-12T21:23:26.155117Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:23:26.157011Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.183:2379"}
	{"level":"info","ts":"2024-06-12T21:23:26.157058Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:23:46 up 1 min,  0 users,  load average: 0.77, 0.25, 0.08
	Linux pause-037058 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [50c42e02d02ef265c7e5a002369aec304334981a61ed2a6364afe368f9c73408] <==
	I0612 21:23:17.011581       1 options.go:221] external host was not specified, using 192.168.61.183
	I0612 21:23:17.013348       1 server.go:148] Version: v1.30.1
	I0612 21:23:17.013535       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:23:17.376263       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0612 21:23:17.383984       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0612 21:23:17.384022       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0612 21:23:17.384241       1 instance.go:299] Using reconciler: lease
	I0612 21:23:17.387128       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0612 21:23:17.429742       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:40808->127.0.0.1:2379: read: connection reset by peer"
	W0612 21:23:17.430027       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:40798->127.0.0.1:2379: read: connection reset by peer"
	W0612 21:23:17.430071       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:40818->127.0.0.1:2379: read: connection reset by peer"
	
	
	==> kube-apiserver [5c3d47884a40251484da885cf97fdc5638aede508a2a170b1694ddcf5ddc2739] <==
	I0612 21:23:27.935768       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0612 21:23:27.935922       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0612 21:23:27.936133       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0612 21:23:27.946260       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0612 21:23:27.952144       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0612 21:23:27.953943       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0612 21:23:27.954215       1 shared_informer.go:320] Caches are synced for configmaps
	I0612 21:23:27.954460       1 aggregator.go:165] initial CRD sync complete...
	I0612 21:23:27.954563       1 autoregister_controller.go:141] Starting autoregister controller
	I0612 21:23:27.954694       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0612 21:23:27.954839       1 cache.go:39] Caches are synced for autoregister controller
	I0612 21:23:27.958151       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 21:23:27.958219       1 policy_source.go:224] refreshing policies
	I0612 21:23:27.998162       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0612 21:23:28.006126       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0612 21:23:28.014249       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0612 21:23:28.048757       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0612 21:23:28.768112       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0612 21:23:29.761346       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 21:23:29.782513       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 21:23:29.843808       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 21:23:29.894554       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0612 21:23:29.910146       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0612 21:23:40.219119       1 controller.go:615] quota admission added evaluator for: endpoints
	I0612 21:23:40.272260       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [093b54e5fc600aecfcda8937c3d5d97e091fdc2ea412252c0646060c2a7caddb] <==
	I0612 21:23:40.178126       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0612 21:23:40.179048       1 shared_informer.go:320] Caches are synced for job
	I0612 21:23:40.174530       1 shared_informer.go:320] Caches are synced for namespace
	I0612 21:23:40.192393       1 shared_informer.go:320] Caches are synced for endpoint
	I0612 21:23:40.203446       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 21:23:40.213811       1 shared_informer.go:320] Caches are synced for crt configmap
	I0612 21:23:40.227011       1 shared_informer.go:320] Caches are synced for PV protection
	I0612 21:23:40.227108       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0612 21:23:40.241115       1 shared_informer.go:320] Caches are synced for taint
	I0612 21:23:40.241223       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0612 21:23:40.241301       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-037058"
	I0612 21:23:40.241361       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0612 21:23:40.253445       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 21:23:40.257062       1 shared_informer.go:320] Caches are synced for attach detach
	I0612 21:23:40.261976       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0612 21:23:40.264748       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0612 21:23:40.312814       1 shared_informer.go:320] Caches are synced for deployment
	I0612 21:23:40.353439       1 shared_informer.go:320] Caches are synced for disruption
	I0612 21:23:40.384996       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0612 21:23:40.385172       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.053µs"
	I0612 21:23:40.400381       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 21:23:40.405110       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 21:23:40.814968       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 21:23:40.876597       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 21:23:40.876662       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [b9fcc14b6885077e88628d5ba20ddb714d13a4863c2dd9ec01d5ecbc66e230b6] <==
	
	
	==> kube-proxy [21b6086205349bb9a604254ea7b1e3281b8da5477c042ff4bac6ae5d77498c12] <==
	I0612 21:23:29.084238       1 server_linux.go:69] "Using iptables proxy"
	I0612 21:23:29.111654       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.183"]
	I0612 21:23:29.209429       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 21:23:29.209786       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 21:23:29.210123       1 server_linux.go:165] "Using iptables Proxier"
	I0612 21:23:29.215798       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 21:23:29.216133       1 server.go:872] "Version info" version="v1.30.1"
	I0612 21:23:29.216341       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:23:29.220118       1 config.go:192] "Starting service config controller"
	I0612 21:23:29.220233       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 21:23:29.220378       1 config.go:101] "Starting endpoint slice config controller"
	I0612 21:23:29.220458       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 21:23:29.221353       1 config.go:319] "Starting node config controller"
	I0612 21:23:29.221459       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 21:23:29.320561       1 shared_informer.go:320] Caches are synced for service config
	I0612 21:23:29.320783       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 21:23:29.321611       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [544c4dcbf5456e439d42ae9273aeddc8e8a57cdfc4b1d787747e1cb5efa59463] <==
	
	
	==> kube-scheduler [cd786b5d1eb5c7692963978ab7e14f2ebe279fbaaeba9270b1524a55945409ba] <==
	
	
	==> kube-scheduler [ed37f693bb8d6c8d41ce1afd6622548ac8432eaeab0e92e327d9b3c6dc94a239] <==
	I0612 21:23:24.679121       1 serving.go:380] Generated self-signed cert in-memory
	W0612 21:23:27.882618       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0612 21:23:27.883040       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 21:23:27.883146       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0612 21:23:27.883187       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 21:23:27.978261       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 21:23:27.978317       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:23:27.988557       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 21:23:27.988668       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 21:23:27.989640       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 21:23:27.992999       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 21:23:28.089838       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 21:23:23 pause-037058 kubelet[3477]: I0612 21:23:23.609570    3477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/ad9c67a0e8de957ecb1ae600e23986b0-etcd-certs\") pod \"etcd-pause-037058\" (UID: \"ad9c67a0e8de957ecb1ae600e23986b0\") " pod="kube-system/etcd-pause-037058"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: I0612 21:23:23.609592    3477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/ad9c67a0e8de957ecb1ae600e23986b0-etcd-data\") pod \"etcd-pause-037058\" (UID: \"ad9c67a0e8de957ecb1ae600e23986b0\") " pod="kube-system/etcd-pause-037058"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: E0612 21:23:23.610507    3477 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-037058?timeout=10s\": dial tcp 192.168.61.183:8443: connect: connection refused" interval="400ms"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: I0612 21:23:23.707455    3477 kubelet_node_status.go:73] "Attempting to register node" node="pause-037058"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: E0612 21:23:23.708310    3477 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.183:8443: connect: connection refused" node="pause-037058"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: I0612 21:23:23.877406    3477 scope.go:117] "RemoveContainer" containerID="008e0213094feaae473c4f22c7169eee1242f41344516677d0824d274db2f68a"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: I0612 21:23:23.878606    3477 scope.go:117] "RemoveContainer" containerID="50c42e02d02ef265c7e5a002369aec304334981a61ed2a6364afe368f9c73408"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: I0612 21:23:23.879591    3477 scope.go:117] "RemoveContainer" containerID="b9fcc14b6885077e88628d5ba20ddb714d13a4863c2dd9ec01d5ecbc66e230b6"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: I0612 21:23:23.880441    3477 scope.go:117] "RemoveContainer" containerID="cd786b5d1eb5c7692963978ab7e14f2ebe279fbaaeba9270b1524a55945409ba"
	Jun 12 21:23:24 pause-037058 kubelet[3477]: E0612 21:23:24.012388    3477 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-037058?timeout=10s\": dial tcp 192.168.61.183:8443: connect: connection refused" interval="800ms"
	Jun 12 21:23:24 pause-037058 kubelet[3477]: I0612 21:23:24.111816    3477 kubelet_node_status.go:73] "Attempting to register node" node="pause-037058"
	Jun 12 21:23:24 pause-037058 kubelet[3477]: E0612 21:23:24.112695    3477 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.183:8443: connect: connection refused" node="pause-037058"
	Jun 12 21:23:24 pause-037058 kubelet[3477]: I0612 21:23:24.914804    3477 kubelet_node_status.go:73] "Attempting to register node" node="pause-037058"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.064982    3477 kubelet_node_status.go:112] "Node was previously registered" node="pause-037058"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.065130    3477 kubelet_node_status.go:76] "Successfully registered node" node="pause-037058"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.068172    3477 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.069531    3477 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.400525    3477 apiserver.go:52] "Watching apiserver"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.404623    3477 topology_manager.go:215] "Topology Admit Handler" podUID="3366c4e4-7aae-4051-97b5-f0544c6dfe66" podNamespace="kube-system" podName="kube-proxy-scm6r"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.406075    3477 topology_manager.go:215] "Topology Admit Handler" podUID="9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2kgfl"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.505277    3477 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.578795    3477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3366c4e4-7aae-4051-97b5-f0544c6dfe66-xtables-lock\") pod \"kube-proxy-scm6r\" (UID: \"3366c4e4-7aae-4051-97b5-f0544c6dfe66\") " pod="kube-system/kube-proxy-scm6r"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.579274    3477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3366c4e4-7aae-4051-97b5-f0544c6dfe66-lib-modules\") pod \"kube-proxy-scm6r\" (UID: \"3366c4e4-7aae-4051-97b5-f0544c6dfe66\") " pod="kube-system/kube-proxy-scm6r"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.707652    3477 scope.go:117] "RemoveContainer" containerID="544c4dcbf5456e439d42ae9273aeddc8e8a57cdfc4b1d787747e1cb5efa59463"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.708241    3477 scope.go:117] "RemoveContainer" containerID="4d71c33b9947b2c38a245115fcddcc4f50efbfbd631a39f930316bb8fbf43541"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-037058 -n pause-037058
helpers_test.go:261: (dbg) Run:  kubectl --context pause-037058 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-037058 -n pause-037058
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-037058 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-037058 logs -n 25: (1.600128835s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-719458             | running-upgrade-719458    | jenkins | v1.33.1 | 12 Jun 24 21:19 UTC | 12 Jun 24 21:21 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-436071           | force-systemd-env-436071  | jenkins | v1.33.1 | 12 Jun 24 21:20 UTC | 12 Jun 24 21:20 UTC |
	| start   | -p force-systemd-flag-732641          | force-systemd-flag-732641 | jenkins | v1.33.1 | 12 Jun 24 21:20 UTC | 12 Jun 24 21:21 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-721096 sudo           | NoKubernetes-721096       | jenkins | v1.33.1 | 12 Jun 24 21:20 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-721096                | NoKubernetes-721096       | jenkins | v1.33.1 | 12 Jun 24 21:20 UTC | 12 Jun 24 21:20 UTC |
	| start   | -p NoKubernetes-721096                | NoKubernetes-721096       | jenkins | v1.33.1 | 12 Jun 24 21:20 UTC | 12 Jun 24 21:21 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-732641 ssh cat     | force-systemd-flag-732641 | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:21 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-732641          | force-systemd-flag-732641 | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:21 UTC |
	| start   | -p cert-expiration-112791             | cert-expiration-112791    | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:21 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-721096 sudo           | NoKubernetes-721096       | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-721096                | NoKubernetes-721096       | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:21 UTC |
	| start   | -p cert-options-449240                | cert-options-449240       | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:22 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-719458             | running-upgrade-719458    | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:21 UTC |
	| start   | -p pause-037058 --memory=2048         | pause-037058              | jenkins | v1.33.1 | 12 Jun 24 21:21 UTC | 12 Jun 24 21:22 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-724108          | kubernetes-upgrade-724108 | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:22 UTC |
	| start   | -p kubernetes-upgrade-724108          | kubernetes-upgrade-724108 | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:23 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-449240 ssh               | cert-options-449240       | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:22 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-449240 -- sudo        | cert-options-449240       | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:22 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-449240                | cert-options-449240       | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:22 UTC |
	| start   | -p stopped-upgrade-776864             | minikube                  | jenkins | v1.26.0 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:23 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p pause-037058                       | pause-037058              | jenkins | v1.33.1 | 12 Jun 24 21:22 UTC | 12 Jun 24 21:23 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-724108          | kubernetes-upgrade-724108 | jenkins | v1.33.1 | 12 Jun 24 21:23 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-724108          | kubernetes-upgrade-724108 | jenkins | v1.33.1 | 12 Jun 24 21:23 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-776864 stop           | minikube                  | jenkins | v1.26.0 | 12 Jun 24 21:23 UTC | 12 Jun 24 21:23 UTC |
	| start   | -p stopped-upgrade-776864             | stopped-upgrade-776864    | jenkins | v1.33.1 | 12 Jun 24 21:23 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 21:23:39
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 21:23:39.686195   64624 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:23:39.686318   64624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:23:39.686330   64624 out.go:304] Setting ErrFile to fd 2...
	I0612 21:23:39.686337   64624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:23:39.686569   64624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:23:39.687073   64624 out.go:298] Setting JSON to false
	I0612 21:23:39.688054   64624 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7565,"bootTime":1718219855,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:23:39.688115   64624 start.go:139] virtualization: kvm guest
	I0612 21:23:39.691312   64624 out.go:177] * [stopped-upgrade-776864] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:23:39.692759   64624 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:23:39.692774   64624 notify.go:220] Checking for updates...
	I0612 21:23:39.694032   64624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:23:39.695295   64624 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:23:39.696523   64624 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:23:39.697745   64624 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:23:39.699100   64624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:23:39.701001   64624 config.go:182] Loaded profile config "stopped-upgrade-776864": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0612 21:23:39.701575   64624 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:23:39.701628   64624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:23:39.718499   64624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34999
	I0612 21:23:39.718903   64624 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:23:39.719434   64624 main.go:141] libmachine: Using API Version  1
	I0612 21:23:39.719456   64624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:23:39.719827   64624 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:23:39.719997   64624 main.go:141] libmachine: (stopped-upgrade-776864) Calling .DriverName
	I0612 21:23:39.721930   64624 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0612 21:23:39.723165   64624 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:23:39.723481   64624 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:23:39.723527   64624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:23:39.738187   64624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33239
	I0612 21:23:39.738620   64624 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:23:39.739037   64624 main.go:141] libmachine: Using API Version  1
	I0612 21:23:39.739067   64624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:23:39.739348   64624 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:23:39.739561   64624 main.go:141] libmachine: (stopped-upgrade-776864) Calling .DriverName
	I0612 21:23:39.776516   64624 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 21:23:39.777854   64624 start.go:297] selected driver: kvm2
	I0612 21:23:39.777865   64624 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-776864 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-776864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0612 21:23:39.777968   64624 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:23:39.778652   64624 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:23:39.778721   64624 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:23:39.793894   64624 install.go:137] /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:23:39.794253   64624 cni.go:84] Creating CNI manager for ""
	I0612 21:23:39.794283   64624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:23:39.794368   64624 start.go:340] cluster config:
	{Name:stopped-upgrade-776864 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-776864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0612 21:23:39.794498   64624 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:23:39.797220   64624 out.go:177] * Starting "stopped-upgrade-776864" primary control-plane node in "stopped-upgrade-776864" cluster
	I0612 21:23:39.798220   64624 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0612 21:23:39.798264   64624 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0612 21:23:39.798271   64624 cache.go:56] Caching tarball of preloaded images
	I0612 21:23:39.798352   64624 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:23:39.798363   64624 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0612 21:23:39.798450   64624 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/stopped-upgrade-776864/config.json ...
	I0612 21:23:39.798625   64624 start.go:360] acquireMachinesLock for stopped-upgrade-776864: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:23:39.798676   64624 start.go:364] duration metric: took 31.701µs to acquireMachinesLock for "stopped-upgrade-776864"
	I0612 21:23:39.798695   64624 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:23:39.798704   64624 fix.go:54] fixHost starting: 
	I0612 21:23:39.799015   64624 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:23:39.799050   64624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:23:39.814100   64624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43507
	I0612 21:23:39.814502   64624 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:23:39.814985   64624 main.go:141] libmachine: Using API Version  1
	I0612 21:23:39.815008   64624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:23:39.815355   64624 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:23:39.815566   64624 main.go:141] libmachine: (stopped-upgrade-776864) Calling .DriverName
	I0612 21:23:39.815727   64624 main.go:141] libmachine: (stopped-upgrade-776864) Calling .GetState
	I0612 21:23:39.817330   64624 fix.go:112] recreateIfNeeded on stopped-upgrade-776864: state=Stopped err=<nil>
	I0612 21:23:39.817350   64624 main.go:141] libmachine: (stopped-upgrade-776864) Calling .DriverName
	W0612 21:23:39.817505   64624 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:23:39.819471   64624 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-776864" ...
	I0612 21:23:39.476435   64208 pod_ready.go:102] pod "etcd-pause-037058" in "kube-system" namespace has status "Ready":"False"
	I0612 21:23:41.476775   64208 pod_ready.go:92] pod "etcd-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:41.476797   64208 pod_ready.go:81] duration metric: took 11.007930368s for pod "etcd-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.476806   64208 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.482289   64208 pod_ready.go:92] pod "kube-apiserver-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:41.482314   64208 pod_ready.go:81] duration metric: took 5.500075ms for pod "kube-apiserver-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.482326   64208 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.495009   64208 pod_ready.go:92] pod "kube-controller-manager-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:41.495038   64208 pod_ready.go:81] duration metric: took 12.703888ms for pod "kube-controller-manager-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.495056   64208 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-scm6r" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.502117   64208 pod_ready.go:92] pod "kube-proxy-scm6r" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:41.502141   64208 pod_ready.go:81] duration metric: took 7.077421ms for pod "kube-proxy-scm6r" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.502152   64208 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.507543   64208 pod_ready.go:92] pod "kube-scheduler-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:41.507562   64208 pod_ready.go:81] duration metric: took 5.403527ms for pod "kube-scheduler-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:41.507569   64208 pod_ready.go:38] duration metric: took 11.557312253s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:23:41.507584   64208 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:23:41.525158   64208 ops.go:34] apiserver oom_adj: -16
	I0612 21:23:41.525182   64208 kubeadm.go:591] duration metric: took 19.418374734s to restartPrimaryControlPlane
	I0612 21:23:41.525193   64208 kubeadm.go:393] duration metric: took 19.519187283s to StartCluster
	I0612 21:23:41.525213   64208 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:23:41.525331   64208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:23:41.526498   64208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:23:41.526771   64208 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.183 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:23:41.528381   64208 out.go:177] * Verifying Kubernetes components...
	I0612 21:23:41.526894   64208 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:23:41.527013   64208 config.go:182] Loaded profile config "pause-037058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:23:41.529627   64208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:23:41.531715   64208 out.go:177] * Enabled addons: 
	I0612 21:23:41.532954   64208 addons.go:510] duration metric: took 6.065504ms for enable addons: enabled=[]
	I0612 21:23:41.737973   64208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:23:41.757312   64208 node_ready.go:35] waiting up to 6m0s for node "pause-037058" to be "Ready" ...
	I0612 21:23:41.761140   64208 node_ready.go:49] node "pause-037058" has status "Ready":"True"
	I0612 21:23:41.761161   64208 node_ready.go:38] duration metric: took 3.814372ms for node "pause-037058" to be "Ready" ...
	I0612 21:23:41.761168   64208 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:23:41.875186   64208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2kgfl" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:42.273273   64208 pod_ready.go:92] pod "coredns-7db6d8ff4d-2kgfl" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:42.273314   64208 pod_ready.go:81] duration metric: took 398.09853ms for pod "coredns-7db6d8ff4d-2kgfl" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:42.273328   64208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:42.674105   64208 pod_ready.go:92] pod "etcd-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:42.674135   64208 pod_ready.go:81] duration metric: took 400.79952ms for pod "etcd-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:42.674149   64208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:43.072653   64208 pod_ready.go:92] pod "kube-apiserver-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:43.072680   64208 pod_ready.go:81] duration metric: took 398.523ms for pod "kube-apiserver-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:43.072695   64208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:43.472919   64208 pod_ready.go:92] pod "kube-controller-manager-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:43.472946   64208 pod_ready.go:81] duration metric: took 400.2425ms for pod "kube-controller-manager-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:43.472959   64208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-scm6r" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:39.820772   64624 main.go:141] libmachine: (stopped-upgrade-776864) Calling .Start
	I0612 21:23:39.820949   64624 main.go:141] libmachine: (stopped-upgrade-776864) Ensuring networks are active...
	I0612 21:23:39.821742   64624 main.go:141] libmachine: (stopped-upgrade-776864) Ensuring network default is active
	I0612 21:23:39.822056   64624 main.go:141] libmachine: (stopped-upgrade-776864) Ensuring network mk-stopped-upgrade-776864 is active
	I0612 21:23:39.822365   64624 main.go:141] libmachine: (stopped-upgrade-776864) Getting domain xml...
	I0612 21:23:39.823026   64624 main.go:141] libmachine: (stopped-upgrade-776864) Creating domain...
	I0612 21:23:41.081774   64624 main.go:141] libmachine: (stopped-upgrade-776864) Waiting to get IP...
	I0612 21:23:41.082585   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | domain stopped-upgrade-776864 has defined MAC address 52:54:00:e1:b9:5c in network mk-stopped-upgrade-776864
	I0612 21:23:41.082979   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | unable to find current IP address of domain stopped-upgrade-776864 in network mk-stopped-upgrade-776864
	I0612 21:23:41.083052   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | I0612 21:23:41.082968   64659 retry.go:31] will retry after 240.481967ms: waiting for machine to come up
	I0612 21:23:41.325611   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | domain stopped-upgrade-776864 has defined MAC address 52:54:00:e1:b9:5c in network mk-stopped-upgrade-776864
	I0612 21:23:41.326140   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | unable to find current IP address of domain stopped-upgrade-776864 in network mk-stopped-upgrade-776864
	I0612 21:23:41.326167   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | I0612 21:23:41.326099   64659 retry.go:31] will retry after 308.643373ms: waiting for machine to come up
	I0612 21:23:41.636945   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | domain stopped-upgrade-776864 has defined MAC address 52:54:00:e1:b9:5c in network mk-stopped-upgrade-776864
	I0612 21:23:41.637531   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | unable to find current IP address of domain stopped-upgrade-776864 in network mk-stopped-upgrade-776864
	I0612 21:23:41.637557   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | I0612 21:23:41.637474   64659 retry.go:31] will retry after 438.420138ms: waiting for machine to come up
	I0612 21:23:42.076976   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | domain stopped-upgrade-776864 has defined MAC address 52:54:00:e1:b9:5c in network mk-stopped-upgrade-776864
	I0612 21:23:42.077534   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | unable to find current IP address of domain stopped-upgrade-776864 in network mk-stopped-upgrade-776864
	I0612 21:23:42.077559   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | I0612 21:23:42.077498   64659 retry.go:31] will retry after 541.198513ms: waiting for machine to come up
	I0612 21:23:42.620200   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | domain stopped-upgrade-776864 has defined MAC address 52:54:00:e1:b9:5c in network mk-stopped-upgrade-776864
	I0612 21:23:42.620745   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | unable to find current IP address of domain stopped-upgrade-776864 in network mk-stopped-upgrade-776864
	I0612 21:23:42.620775   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | I0612 21:23:42.620690   64659 retry.go:31] will retry after 461.764015ms: waiting for machine to come up
	I0612 21:23:43.084037   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | domain stopped-upgrade-776864 has defined MAC address 52:54:00:e1:b9:5c in network mk-stopped-upgrade-776864
	I0612 21:23:43.084539   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | unable to find current IP address of domain stopped-upgrade-776864 in network mk-stopped-upgrade-776864
	I0612 21:23:43.084566   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | I0612 21:23:43.084499   64659 retry.go:31] will retry after 795.810621ms: waiting for machine to come up
	I0612 21:23:43.881411   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | domain stopped-upgrade-776864 has defined MAC address 52:54:00:e1:b9:5c in network mk-stopped-upgrade-776864
	I0612 21:23:43.881867   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | unable to find current IP address of domain stopped-upgrade-776864 in network mk-stopped-upgrade-776864
	I0612 21:23:43.881895   64624 main.go:141] libmachine: (stopped-upgrade-776864) DBG | I0612 21:23:43.881812   64659 retry.go:31] will retry after 967.517152ms: waiting for machine to come up
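The "will retry after …" lines above come from a bounded, jittered backoff loop that polls until the guest VM obtains an IP address. A minimal, self-contained sketch of that pattern is below; lookupIP is a hypothetical placeholder (in the real flow libmachine queries the libvirt network for a DHCP lease), and the delays only roughly mimic the 240ms–967ms steps seen in the log.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for the real lease lookup; it always fails here.
func lookupIP() (string, error) {
	return "", errors.New("no DHCP lease yet")
}

// waitForIP retries lookupIP with a growing, jittered delay until it succeeds
// or the overall timeout is exceeded.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// add up to 50% jitter, then grow the base delay by 1.5x
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}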
	I0612 21:23:43.873270   64208 pod_ready.go:92] pod "kube-proxy-scm6r" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:43.873304   64208 pod_ready.go:81] duration metric: took 400.336886ms for pod "kube-proxy-scm6r" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:43.873319   64208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:44.273307   64208 pod_ready.go:92] pod "kube-scheduler-pause-037058" in "kube-system" namespace has status "Ready":"True"
	I0612 21:23:44.273334   64208 pod_ready.go:81] duration metric: took 400.006748ms for pod "kube-scheduler-pause-037058" in "kube-system" namespace to be "Ready" ...
	I0612 21:23:44.273341   64208 pod_ready.go:38] duration metric: took 2.512163803s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:23:44.273357   64208 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:23:44.273405   64208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:23:44.288217   64208 api_server.go:72] duration metric: took 2.761402136s to wait for apiserver process to appear ...
	I0612 21:23:44.288251   64208 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:23:44.288277   64208 api_server.go:253] Checking apiserver healthz at https://192.168.61.183:8443/healthz ...
	I0612 21:23:44.296224   64208 api_server.go:279] https://192.168.61.183:8443/healthz returned 200:
	ok
	I0612 21:23:44.297310   64208 api_server.go:141] control plane version: v1.30.1
	I0612 21:23:44.297337   64208 api_server.go:131] duration metric: took 9.078219ms to wait for apiserver health ...
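The healthz probe logged above amounts to a plain HTTPS GET that expects a 200 response with an "ok" body. A minimal sketch follows, reusing the endpoint from the log; InsecureSkipVerify is used only to keep the example self-contained (the real check authenticates against the cluster's CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// skip cert verification purely for illustration
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.61.183:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}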
	I0612 21:23:44.297348   64208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:23:44.475296   64208 system_pods.go:59] 6 kube-system pods found
	I0612 21:23:44.475326   64208 system_pods.go:61] "coredns-7db6d8ff4d-2kgfl" [9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc] Running
	I0612 21:23:44.475330   64208 system_pods.go:61] "etcd-pause-037058" [9d83914c-7387-4b15-bc28-3f1f8e9f6254] Running
	I0612 21:23:44.475339   64208 system_pods.go:61] "kube-apiserver-pause-037058" [0c8b0c81-a37d-4759-83f5-74f0fa0e0830] Running
	I0612 21:23:44.475343   64208 system_pods.go:61] "kube-controller-manager-pause-037058" [96c76d53-eec8-480b-bfa0-8d8170424d0f] Running
	I0612 21:23:44.475346   64208 system_pods.go:61] "kube-proxy-scm6r" [3366c4e4-7aae-4051-97b5-f0544c6dfe66] Running
	I0612 21:23:44.475349   64208 system_pods.go:61] "kube-scheduler-pause-037058" [3be565c9-d28b-41f4-b9c0-5af58beb72ad] Running
	I0612 21:23:44.475355   64208 system_pods.go:74] duration metric: took 178.001054ms to wait for pod list to return data ...
	I0612 21:23:44.475361   64208 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:23:44.673667   64208 default_sa.go:45] found service account: "default"
	I0612 21:23:44.673699   64208 default_sa.go:55] duration metric: took 198.330595ms for default service account to be created ...
	I0612 21:23:44.673711   64208 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:23:44.875981   64208 system_pods.go:86] 6 kube-system pods found
	I0612 21:23:44.876011   64208 system_pods.go:89] "coredns-7db6d8ff4d-2kgfl" [9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc] Running
	I0612 21:23:44.876017   64208 system_pods.go:89] "etcd-pause-037058" [9d83914c-7387-4b15-bc28-3f1f8e9f6254] Running
	I0612 21:23:44.876021   64208 system_pods.go:89] "kube-apiserver-pause-037058" [0c8b0c81-a37d-4759-83f5-74f0fa0e0830] Running
	I0612 21:23:44.876025   64208 system_pods.go:89] "kube-controller-manager-pause-037058" [96c76d53-eec8-480b-bfa0-8d8170424d0f] Running
	I0612 21:23:44.876028   64208 system_pods.go:89] "kube-proxy-scm6r" [3366c4e4-7aae-4051-97b5-f0544c6dfe66] Running
	I0612 21:23:44.876032   64208 system_pods.go:89] "kube-scheduler-pause-037058" [3be565c9-d28b-41f4-b9c0-5af58beb72ad] Running
	I0612 21:23:44.876040   64208 system_pods.go:126] duration metric: took 202.322901ms to wait for k8s-apps to be running ...
	I0612 21:23:44.876047   64208 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:23:44.876106   64208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:23:44.894305   64208 system_svc.go:56] duration metric: took 18.249302ms WaitForService to wait for kubelet
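The kubelet liveness check above shells out to systemctl and relies only on the command's exit status. A minimal local sketch of the same idea (run directly rather than through the test harness's ssh_runner) is shown below.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// exit status 0 means the unit is active; anything else surfaces as an error
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}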
	I0612 21:23:44.894335   64208 kubeadm.go:576] duration metric: took 3.367533301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:23:44.894354   64208 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:23:45.073494   64208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:23:45.073520   64208 node_conditions.go:123] node cpu capacity is 2
	I0612 21:23:45.073530   64208 node_conditions.go:105] duration metric: took 179.171788ms to run NodePressure ...
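The NodePressure step reads the node's reported capacity (ephemeral storage and CPU, as printed above). A sketch of fetching the same fields with client-go follows; the kubeconfig path and node name are taken from this run's log, and error handling is kept minimal for brevity.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17779-14199/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-037058", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// capacity fields correspond to the values logged by node_conditions.go
	fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
	fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
}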
	I0612 21:23:45.073540   64208 start.go:240] waiting for startup goroutines ...
	I0612 21:23:45.073546   64208 start.go:245] waiting for cluster config update ...
	I0612 21:23:45.073553   64208 start.go:254] writing updated cluster config ...
	I0612 21:23:45.073812   64208 ssh_runner.go:195] Run: rm -f paused
	I0612 21:23:45.131164   64208 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:23:45.133572   64208 out.go:177] * Done! kubectl is now configured to use "pause-037058" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jun 12 21:23:47 pause-037058 crio[2780]: time="2024-06-12 21:23:47.939080955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718227427939054391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7352825-af98-459f-b677-898ab660aad7 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:47 pause-037058 crio[2780]: time="2024-06-12 21:23:47.939738948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5db6311b-2e4c-42d4-b43c-a36f7c30236d name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:47 pause-037058 crio[2780]: time="2024-06-12 21:23:47.939833888Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5db6311b-2e4c-42d4-b43c-a36f7c30236d name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:47 pause-037058 crio[2780]: time="2024-06-12 21:23:47.940240763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bb0de71317f3390209a54c48373bb8b2807098ffde057c5185c376cb7ff994b,PodSandboxId:919def9aca75a6d69338455d2b30ad963305362e7d7141adb5977201f513b7ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227408761000261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba87272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b6086205349bb9a604254ea7b1e3281b8da5477c042ff4bac6ae5d77498c12,PodSandboxId:9038010f6c043cb6fd6ee683a9de4447569c5499870e0391863b57fdfca84369,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718227408732573466,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355f6b69042f695c626d23527932fbdf663915c5565fcd11085541facedc04a4,PodSandboxId:a143413ada3e631ad6d4e891bc5808a3abd65bb87d6a2c94801dc7c0b3b28781,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718227403919599031,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed37f693bb8d6c8d41ce1afd6622548ac8432eaeab0e92e327d9b3c6dc94a239,PodSandboxId:f74141f9f9a70597af8f6e2aea2d137b1cd344e53ec42c2482971e65af18c28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718227403898749980,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b54e5fc600aecfcda8937c3d5d97e091fdc2ea412252c0646060c2a7caddb,PodSandboxId:695981f57fbc41789dd514a73a69de627bcc012ebc25b8dfff2919b19d927491,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718227403923084670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3d47884a40251484da885cf97fdc5638aede508a2a170b1694ddcf5ddc2739,PodSandboxId:92f1c525698b21333d3fb84231a2a253dd4b4948bd3b82b7254bc4ff84c6d3ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718227403908279050,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io
.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71c33b9947b2c38a245115fcddcc4f50efbfbd631a39f930316bb8fbf43541,PodSandboxId:967bc61836b86397f64eb984ae919bc26f3006115c179f8402193a817ea9ef80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227396974936993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba8
7272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544c4dcbf5456e439d42ae9273aeddc8e8a57cdfc4b1d787747e1cb5efa59463,PodSandboxId:a65d17b2d1768a5bab749f25c547d0d8accc5683079b3b037824550703d9b289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718227396319132902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fcc14b6885077e88628d5ba20ddb714d13a4863c2dd9ec01d5ecbc66e230b6,PodSandboxId:2f6bcb9dcd6989244c1b9096e05350163cd52e14b75bf59753a6aadf30cbd88e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718227396509516663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd786b5d1eb5c7692963978ab7e14f2ebe279fbaaeba9270b1524a55945409ba,PodSandboxId:402f7d2555c853141d70a9a1535a4fd3740dad93f3f446b36faa143c9b2b8721,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718227396354798603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c42e02d02ef265c7e5a002369aec304334981a61ed2a6364afe368f9c73408,PodSandboxId:0fdae0b8583cd662a0add65b324aa56542781882b028101fa7b0bcba2168c7f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718227396328367033,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008e0213094feaae473c4f22c7169eee1242f41344516677d0824d274db2f68a,PodSandboxId:deec78588d01221ef82f2e8921dcd3ddb32fe4955c2febd9e50a111949569834,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718227396207660071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5db6311b-2e4c-42d4-b43c-a36f7c30236d name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.002682942Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97a4140e-ebc9-40e3-819b-c5be1a1a4da2 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.002802996Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97a4140e-ebc9-40e3-819b-c5be1a1a4da2 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.008800337Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20f4c652-2e38-47b6-ae25-121501ada5c6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.009232240Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718227428009208667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20f4c652-2e38-47b6-ae25-121501ada5c6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.010154252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7605a89-1e40-480a-802c-bca7c20721ed name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.010519984Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7605a89-1e40-480a-802c-bca7c20721ed name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.013172135Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bb0de71317f3390209a54c48373bb8b2807098ffde057c5185c376cb7ff994b,PodSandboxId:919def9aca75a6d69338455d2b30ad963305362e7d7141adb5977201f513b7ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227408761000261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba87272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b6086205349bb9a604254ea7b1e3281b8da5477c042ff4bac6ae5d77498c12,PodSandboxId:9038010f6c043cb6fd6ee683a9de4447569c5499870e0391863b57fdfca84369,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718227408732573466,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355f6b69042f695c626d23527932fbdf663915c5565fcd11085541facedc04a4,PodSandboxId:a143413ada3e631ad6d4e891bc5808a3abd65bb87d6a2c94801dc7c0b3b28781,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718227403919599031,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed37f693bb8d6c8d41ce1afd6622548ac8432eaeab0e92e327d9b3c6dc94a239,PodSandboxId:f74141f9f9a70597af8f6e2aea2d137b1cd344e53ec42c2482971e65af18c28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718227403898749980,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b54e5fc600aecfcda8937c3d5d97e091fdc2ea412252c0646060c2a7caddb,PodSandboxId:695981f57fbc41789dd514a73a69de627bcc012ebc25b8dfff2919b19d927491,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718227403923084670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3d47884a40251484da885cf97fdc5638aede508a2a170b1694ddcf5ddc2739,PodSandboxId:92f1c525698b21333d3fb84231a2a253dd4b4948bd3b82b7254bc4ff84c6d3ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718227403908279050,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io
.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71c33b9947b2c38a245115fcddcc4f50efbfbd631a39f930316bb8fbf43541,PodSandboxId:967bc61836b86397f64eb984ae919bc26f3006115c179f8402193a817ea9ef80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227396974936993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba8
7272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544c4dcbf5456e439d42ae9273aeddc8e8a57cdfc4b1d787747e1cb5efa59463,PodSandboxId:a65d17b2d1768a5bab749f25c547d0d8accc5683079b3b037824550703d9b289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718227396319132902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fcc14b6885077e88628d5ba20ddb714d13a4863c2dd9ec01d5ecbc66e230b6,PodSandboxId:2f6bcb9dcd6989244c1b9096e05350163cd52e14b75bf59753a6aadf30cbd88e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718227396509516663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd786b5d1eb5c7692963978ab7e14f2ebe279fbaaeba9270b1524a55945409ba,PodSandboxId:402f7d2555c853141d70a9a1535a4fd3740dad93f3f446b36faa143c9b2b8721,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718227396354798603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c42e02d02ef265c7e5a002369aec304334981a61ed2a6364afe368f9c73408,PodSandboxId:0fdae0b8583cd662a0add65b324aa56542781882b028101fa7b0bcba2168c7f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718227396328367033,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008e0213094feaae473c4f22c7169eee1242f41344516677d0824d274db2f68a,PodSandboxId:deec78588d01221ef82f2e8921dcd3ddb32fe4955c2febd9e50a111949569834,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718227396207660071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7605a89-1e40-480a-802c-bca7c20721ed name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.070691737Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=481048f0-5620-4f33-b875-dd9f4eb21b72 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.070765789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=481048f0-5620-4f33-b875-dd9f4eb21b72 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.072679477Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5bd1084d-de75-4b74-b350-bad0fd6dfb63 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.073337844Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718227428073302831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bd1084d-de75-4b74-b350-bad0fd6dfb63 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.074488105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=350760c5-41ba-45a9-b3e6-727aded2336f name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.074602050Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=350760c5-41ba-45a9-b3e6-727aded2336f name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.075049958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bb0de71317f3390209a54c48373bb8b2807098ffde057c5185c376cb7ff994b,PodSandboxId:919def9aca75a6d69338455d2b30ad963305362e7d7141adb5977201f513b7ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227408761000261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba87272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b6086205349bb9a604254ea7b1e3281b8da5477c042ff4bac6ae5d77498c12,PodSandboxId:9038010f6c043cb6fd6ee683a9de4447569c5499870e0391863b57fdfca84369,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718227408732573466,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355f6b69042f695c626d23527932fbdf663915c5565fcd11085541facedc04a4,PodSandboxId:a143413ada3e631ad6d4e891bc5808a3abd65bb87d6a2c94801dc7c0b3b28781,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718227403919599031,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed37f693bb8d6c8d41ce1afd6622548ac8432eaeab0e92e327d9b3c6dc94a239,PodSandboxId:f74141f9f9a70597af8f6e2aea2d137b1cd344e53ec42c2482971e65af18c28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718227403898749980,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b54e5fc600aecfcda8937c3d5d97e091fdc2ea412252c0646060c2a7caddb,PodSandboxId:695981f57fbc41789dd514a73a69de627bcc012ebc25b8dfff2919b19d927491,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718227403923084670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3d47884a40251484da885cf97fdc5638aede508a2a170b1694ddcf5ddc2739,PodSandboxId:92f1c525698b21333d3fb84231a2a253dd4b4948bd3b82b7254bc4ff84c6d3ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718227403908279050,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io
.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71c33b9947b2c38a245115fcddcc4f50efbfbd631a39f930316bb8fbf43541,PodSandboxId:967bc61836b86397f64eb984ae919bc26f3006115c179f8402193a817ea9ef80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227396974936993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba8
7272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544c4dcbf5456e439d42ae9273aeddc8e8a57cdfc4b1d787747e1cb5efa59463,PodSandboxId:a65d17b2d1768a5bab749f25c547d0d8accc5683079b3b037824550703d9b289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718227396319132902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fcc14b6885077e88628d5ba20ddb714d13a4863c2dd9ec01d5ecbc66e230b6,PodSandboxId:2f6bcb9dcd6989244c1b9096e05350163cd52e14b75bf59753a6aadf30cbd88e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718227396509516663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd786b5d1eb5c7692963978ab7e14f2ebe279fbaaeba9270b1524a55945409ba,PodSandboxId:402f7d2555c853141d70a9a1535a4fd3740dad93f3f446b36faa143c9b2b8721,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718227396354798603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c42e02d02ef265c7e5a002369aec304334981a61ed2a6364afe368f9c73408,PodSandboxId:0fdae0b8583cd662a0add65b324aa56542781882b028101fa7b0bcba2168c7f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718227396328367033,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008e0213094feaae473c4f22c7169eee1242f41344516677d0824d274db2f68a,PodSandboxId:deec78588d01221ef82f2e8921dcd3ddb32fe4955c2febd9e50a111949569834,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718227396207660071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=350760c5-41ba-45a9-b3e6-727aded2336f name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.120915177Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3bae724-13f7-4298-abf3-c8445c375eed name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.121010495Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3bae724-13f7-4298-abf3-c8445c375eed name=/runtime.v1.RuntimeService/Version
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.122711727Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c21ee8d-2c0b-46c4-bb9b-46abb183364b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.123380779Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718227428123329784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c21ee8d-2c0b-46c4-bb9b-46abb183364b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.124133225Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4776c85d-2cc0-4a6a-89ee-5596aefbe0e7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.124212087Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4776c85d-2cc0-4a6a-89ee-5596aefbe0e7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:23:48 pause-037058 crio[2780]: time="2024-06-12 21:23:48.124568298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bb0de71317f3390209a54c48373bb8b2807098ffde057c5185c376cb7ff994b,PodSandboxId:919def9aca75a6d69338455d2b30ad963305362e7d7141adb5977201f513b7ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718227408761000261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba87272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b6086205349bb9a604254ea7b1e3281b8da5477c042ff4bac6ae5d77498c12,PodSandboxId:9038010f6c043cb6fd6ee683a9de4447569c5499870e0391863b57fdfca84369,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718227408732573466,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355f6b69042f695c626d23527932fbdf663915c5565fcd11085541facedc04a4,PodSandboxId:a143413ada3e631ad6d4e891bc5808a3abd65bb87d6a2c94801dc7c0b3b28781,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718227403919599031,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed37f693bb8d6c8d41ce1afd6622548ac8432eaeab0e92e327d9b3c6dc94a239,PodSandboxId:f74141f9f9a70597af8f6e2aea2d137b1cd344e53ec42c2482971e65af18c28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718227403898749980,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]
string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b54e5fc600aecfcda8937c3d5d97e091fdc2ea412252c0646060c2a7caddb,PodSandboxId:695981f57fbc41789dd514a73a69de627bcc012ebc25b8dfff2919b19d927491,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718227403923084670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c3d47884a40251484da885cf97fdc5638aede508a2a170b1694ddcf5ddc2739,PodSandboxId:92f1c525698b21333d3fb84231a2a253dd4b4948bd3b82b7254bc4ff84c6d3ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718227403908279050,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io
.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d71c33b9947b2c38a245115fcddcc4f50efbfbd631a39f930316bb8fbf43541,PodSandboxId:967bc61836b86397f64eb984ae919bc26f3006115c179f8402193a817ea9ef80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718227396974936993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2kgfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc,},Annotations:map[string]string{io.kubernetes.container.hash: eba8
7272,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544c4dcbf5456e439d42ae9273aeddc8e8a57cdfc4b1d787747e1cb5efa59463,PodSandboxId:a65d17b2d1768a5bab749f25c547d0d8accc5683079b3b037824550703d9b289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718227396319132902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-scm6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3366c4e4-7aae-4051-97b5-f0544c6dfe66,},Annotations:map[string]string{io.kubernetes.container.hash: bf7ccae4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fcc14b6885077e88628d5ba20ddb714d13a4863c2dd9ec01d5ecbc66e230b6,PodSandboxId:2f6bcb9dcd6989244c1b9096e05350163cd52e14b75bf59753a6aadf30cbd88e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718227396509516663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b3606ac749bea11f18612697c83ec3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd786b5d1eb5c7692963978ab7e14f2ebe279fbaaeba9270b1524a55945409ba,PodSandboxId:402f7d2555c853141d70a9a1535a4fd3740dad93f3f446b36faa143c9b2b8721,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718227396354798603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d60af0b62c0eac0998c1704b60077f4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c42e02d02ef265c7e5a002369aec304334981a61ed2a6364afe368f9c73408,PodSandboxId:0fdae0b8583cd662a0add65b324aa56542781882b028101fa7b0bcba2168c7f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718227396328367033,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-037058,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89830a681b20ef91a35626d43c7e2eb,},Annotations:map[string]string{io.kubernetes.container.hash: 91e2ac30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008e0213094feaae473c4f22c7169eee1242f41344516677d0824d274db2f68a,PodSandboxId:deec78588d01221ef82f2e8921dcd3ddb32fe4955c2febd9e50a111949569834,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718227396207660071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-037058,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad9c67a0e8de957ecb1ae600e23986b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4860bb91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4776c85d-2cc0-4a6a-89ee-5596aefbe0e7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4bb0de71317f3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   919def9aca75a       coredns-7db6d8ff4d-2kgfl
	21b6086205349       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   19 seconds ago      Running             kube-proxy                2                   9038010f6c043       kube-proxy-scm6r
	093b54e5fc600       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   24 seconds ago      Running             kube-controller-manager   2                   695981f57fbc4       kube-controller-manager-pause-037058
	355f6b69042f6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago      Running             etcd                      2                   a143413ada3e6       etcd-pause-037058
	5c3d47884a402       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   24 seconds ago      Running             kube-apiserver            2                   92f1c525698b2       kube-apiserver-pause-037058
	ed37f693bb8d6       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   24 seconds ago      Running             kube-scheduler            2                   f74141f9f9a70       kube-scheduler-pause-037058
	4d71c33b9947b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   31 seconds ago      Exited              coredns                   1                   967bc61836b86       coredns-7db6d8ff4d-2kgfl
	b9fcc14b68850       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   31 seconds ago      Exited              kube-controller-manager   1                   2f6bcb9dcd698       kube-controller-manager-pause-037058
	cd786b5d1eb5c       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   31 seconds ago      Exited              kube-scheduler            1                   402f7d2555c85       kube-scheduler-pause-037058
	50c42e02d02ef       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   31 seconds ago      Exited              kube-apiserver            1                   0fdae0b8583cd       kube-apiserver-pause-037058
	544c4dcbf5456       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   31 seconds ago      Exited              kube-proxy                1                   a65d17b2d1768       kube-proxy-scm6r
	008e0213094fe       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   32 seconds ago      Exited              etcd                      1                   deec78588d012       etcd-pause-037058
	
	
	==> coredns [4bb0de71317f3390209a54c48373bb8b2807098ffde057c5185c376cb7ff994b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48442 - 43804 "HINFO IN 1277118975026617333.5542116640370041542. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015192133s
	
	
	==> coredns [4d71c33b9947b2c38a245115fcddcc4f50efbfbd631a39f930316bb8fbf43541] <==
	
	
	==> describe nodes <==
	Name:               pause-037058
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-037058
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=pause-037058
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T21_22_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:22:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-037058
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:23:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:23:28 +0000   Wed, 12 Jun 2024 21:22:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:23:28 +0000   Wed, 12 Jun 2024 21:22:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:23:28 +0000   Wed, 12 Jun 2024 21:22:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:23:28 +0000   Wed, 12 Jun 2024 21:22:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.183
	  Hostname:    pause-037058
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 253002bff60544a8a83c3314f9d3b9a2
	  System UUID:                253002bf-f605-44a8-a83c-3314f9d3b9a2
	  Boot ID:                    bb272c6b-e016-46d3-809a-560e2b565957
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2kgfl                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     52s
	  kube-system                 etcd-pause-037058                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         68s
	  kube-system                 kube-apiserver-pause-037058             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-pause-037058    200m (10%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-proxy-scm6r                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-scheduler-pause-037058             100m (5%)     0 (0%)      0 (0%)           0 (0%)         68s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 51s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientPID     68s                kubelet          Node pause-037058 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  68s                kubelet          Node pause-037058 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s                kubelet          Node pause-037058 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 68s                kubelet          Starting kubelet.
	  Normal  NodeReady                67s                kubelet          Node pause-037058 status is now: NodeReady
	  Normal  RegisteredNode           53s                node-controller  Node pause-037058 event: Registered Node pause-037058 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-037058 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-037058 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-037058 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                 node-controller  Node pause-037058 event: Registered Node pause-037058 in Controller
	
	
	==> dmesg <==
	[  +8.277333] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.062043] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062202] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.178557] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.131785] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.347801] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.411775] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.067530] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.297705] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +1.795410] kauditd_printk_skb: 57 callbacks suppressed
	[  +4.737731] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[  +4.679563] kauditd_printk_skb: 58 callbacks suppressed
	[ +11.184438] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[Jun12 21:23] systemd-fstab-generator[2174]: Ignoring "noauto" option for root device
	[  +0.081630] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.071517] systemd-fstab-generator[2186]: Ignoring "noauto" option for root device
	[  +0.371481] systemd-fstab-generator[2257]: Ignoring "noauto" option for root device
	[  +0.411652] systemd-fstab-generator[2410]: Ignoring "noauto" option for root device
	[  +0.839440] systemd-fstab-generator[2703]: Ignoring "noauto" option for root device
	[  +3.680798] kauditd_printk_skb: 173 callbacks suppressed
	[  +0.593347] systemd-fstab-generator[3302]: Ignoring "noauto" option for root device
	[  +1.770641] systemd-fstab-generator[3470]: Ignoring "noauto" option for root device
	[  +5.711215] kauditd_printk_skb: 109 callbacks suppressed
	[ +11.406155] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.307280] systemd-fstab-generator[3899]: Ignoring "noauto" option for root device
	
	
	==> etcd [008e0213094feaae473c4f22c7169eee1242f41344516677d0824d274db2f68a] <==
	{"level":"warn","ts":"2024-06-12T21:23:16.973122Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-12T21:23:16.973384Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.61.183:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.61.183:2380","--initial-cluster=pause-037058=https://192.168.61.183:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.61.183:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.61.183:2380","--name=pause-037058","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-06-12T21:23:16.97412Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-06-12T21:23:16.97434Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-12T21:23:16.974447Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.61.183:2380"]}
	{"level":"info","ts":"2024-06-12T21:23:16.974608Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-12T21:23:16.975973Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.183:2379"]}
	{"level":"info","ts":"2024-06-12T21:23:16.976323Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-037058","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.61.183:2380"],"listen-peer-urls":["https://192.168.61.183:2380"],"advertise-client-urls":["https://192.168.61.183:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.183:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cl
uster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-06-12T21:23:17.023661Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"46.49235ms"}
	{"level":"info","ts":"2024-06-12T21:23:17.100241Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-06-12T21:23:17.212378Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"438aa8919cf6d084","local-member-id":"378cdee1d1b27193","commit-index":390}
	{"level":"info","ts":"2024-06-12T21:23:17.212704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 switched to configuration voters=()"}
	{"level":"info","ts":"2024-06-12T21:23:17.212811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 became follower at term 2"}
	{"level":"info","ts":"2024-06-12T21:23:17.212829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 378cdee1d1b27193 [peers: [], term: 2, commit: 390, applied: 0, lastindex: 390, lastterm: 2]"}
	
	
	==> etcd [355f6b69042f695c626d23527932fbdf663915c5565fcd11085541facedc04a4] <==
	{"level":"info","ts":"2024-06-12T21:23:24.393561Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-12T21:23:24.39359Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-12T21:23:24.394148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 switched to configuration voters=(4002819230292668819)"}
	{"level":"info","ts":"2024-06-12T21:23:24.394285Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"438aa8919cf6d084","local-member-id":"378cdee1d1b27193","added-peer-id":"378cdee1d1b27193","added-peer-peer-urls":["https://192.168.61.183:2380"]}
	{"level":"info","ts":"2024-06-12T21:23:24.394502Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"438aa8919cf6d084","local-member-id":"378cdee1d1b27193","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:23:24.394594Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:23:24.416447Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-12T21:23:24.416736Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.183:2380"}
	{"level":"info","ts":"2024-06-12T21:23:24.416919Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.183:2380"}
	{"level":"info","ts":"2024-06-12T21:23:24.418316Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"378cdee1d1b27193","initial-advertise-peer-urls":["https://192.168.61.183:2380"],"listen-peer-urls":["https://192.168.61.183:2380"],"advertise-client-urls":["https://192.168.61.183:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.183:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-12T21:23:24.418423Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-12T21:23:26.148748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-12T21:23:26.148809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-12T21:23:26.148839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 received MsgPreVoteResp from 378cdee1d1b27193 at term 2"}
	{"level":"info","ts":"2024-06-12T21:23:26.148901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 became candidate at term 3"}
	{"level":"info","ts":"2024-06-12T21:23:26.148909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 received MsgVoteResp from 378cdee1d1b27193 at term 3"}
	{"level":"info","ts":"2024-06-12T21:23:26.14893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 became leader at term 3"}
	{"level":"info","ts":"2024-06-12T21:23:26.148937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 378cdee1d1b27193 elected leader 378cdee1d1b27193 at term 3"}
	{"level":"info","ts":"2024-06-12T21:23:26.154624Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"378cdee1d1b27193","local-member-attributes":"{Name:pause-037058 ClientURLs:[https://192.168.61.183:2379]}","request-path":"/0/members/378cdee1d1b27193/attributes","cluster-id":"438aa8919cf6d084","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-12T21:23:26.154718Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:23:26.155021Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-12T21:23:26.155115Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-12T21:23:26.155117Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:23:26.157011Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.183:2379"}
	{"level":"info","ts":"2024-06-12T21:23:26.157058Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:23:48 up 1 min,  0 users,  load average: 0.77, 0.25, 0.08
	Linux pause-037058 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [50c42e02d02ef265c7e5a002369aec304334981a61ed2a6364afe368f9c73408] <==
	I0612 21:23:17.011581       1 options.go:221] external host was not specified, using 192.168.61.183
	I0612 21:23:17.013348       1 server.go:148] Version: v1.30.1
	I0612 21:23:17.013535       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:23:17.376263       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0612 21:23:17.383984       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0612 21:23:17.384022       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0612 21:23:17.384241       1 instance.go:299] Using reconciler: lease
	I0612 21:23:17.387128       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0612 21:23:17.429742       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:40808->127.0.0.1:2379: read: connection reset by peer"
	W0612 21:23:17.430027       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:40798->127.0.0.1:2379: read: connection reset by peer"
	W0612 21:23:17.430071       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:40818->127.0.0.1:2379: read: connection reset by peer"
	
	
	==> kube-apiserver [5c3d47884a40251484da885cf97fdc5638aede508a2a170b1694ddcf5ddc2739] <==
	I0612 21:23:27.935768       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0612 21:23:27.935922       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0612 21:23:27.936133       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0612 21:23:27.946260       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0612 21:23:27.952144       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0612 21:23:27.953943       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0612 21:23:27.954215       1 shared_informer.go:320] Caches are synced for configmaps
	I0612 21:23:27.954460       1 aggregator.go:165] initial CRD sync complete...
	I0612 21:23:27.954563       1 autoregister_controller.go:141] Starting autoregister controller
	I0612 21:23:27.954694       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0612 21:23:27.954839       1 cache.go:39] Caches are synced for autoregister controller
	I0612 21:23:27.958151       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 21:23:27.958219       1 policy_source.go:224] refreshing policies
	I0612 21:23:27.998162       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0612 21:23:28.006126       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0612 21:23:28.014249       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0612 21:23:28.048757       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0612 21:23:28.768112       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0612 21:23:29.761346       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 21:23:29.782513       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 21:23:29.843808       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 21:23:29.894554       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0612 21:23:29.910146       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0612 21:23:40.219119       1 controller.go:615] quota admission added evaluator for: endpoints
	I0612 21:23:40.272260       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [093b54e5fc600aecfcda8937c3d5d97e091fdc2ea412252c0646060c2a7caddb] <==
	I0612 21:23:40.178126       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0612 21:23:40.179048       1 shared_informer.go:320] Caches are synced for job
	I0612 21:23:40.174530       1 shared_informer.go:320] Caches are synced for namespace
	I0612 21:23:40.192393       1 shared_informer.go:320] Caches are synced for endpoint
	I0612 21:23:40.203446       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 21:23:40.213811       1 shared_informer.go:320] Caches are synced for crt configmap
	I0612 21:23:40.227011       1 shared_informer.go:320] Caches are synced for PV protection
	I0612 21:23:40.227108       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0612 21:23:40.241115       1 shared_informer.go:320] Caches are synced for taint
	I0612 21:23:40.241223       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0612 21:23:40.241301       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-037058"
	I0612 21:23:40.241361       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0612 21:23:40.253445       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 21:23:40.257062       1 shared_informer.go:320] Caches are synced for attach detach
	I0612 21:23:40.261976       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0612 21:23:40.264748       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0612 21:23:40.312814       1 shared_informer.go:320] Caches are synced for deployment
	I0612 21:23:40.353439       1 shared_informer.go:320] Caches are synced for disruption
	I0612 21:23:40.384996       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0612 21:23:40.385172       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.053µs"
	I0612 21:23:40.400381       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 21:23:40.405110       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 21:23:40.814968       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 21:23:40.876597       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 21:23:40.876662       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [b9fcc14b6885077e88628d5ba20ddb714d13a4863c2dd9ec01d5ecbc66e230b6] <==
	
	
	==> kube-proxy [21b6086205349bb9a604254ea7b1e3281b8da5477c042ff4bac6ae5d77498c12] <==
	I0612 21:23:29.084238       1 server_linux.go:69] "Using iptables proxy"
	I0612 21:23:29.111654       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.183"]
	I0612 21:23:29.209429       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 21:23:29.209786       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 21:23:29.210123       1 server_linux.go:165] "Using iptables Proxier"
	I0612 21:23:29.215798       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 21:23:29.216133       1 server.go:872] "Version info" version="v1.30.1"
	I0612 21:23:29.216341       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:23:29.220118       1 config.go:192] "Starting service config controller"
	I0612 21:23:29.220233       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 21:23:29.220378       1 config.go:101] "Starting endpoint slice config controller"
	I0612 21:23:29.220458       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 21:23:29.221353       1 config.go:319] "Starting node config controller"
	I0612 21:23:29.221459       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 21:23:29.320561       1 shared_informer.go:320] Caches are synced for service config
	I0612 21:23:29.320783       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 21:23:29.321611       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [544c4dcbf5456e439d42ae9273aeddc8e8a57cdfc4b1d787747e1cb5efa59463] <==
	
	
	==> kube-scheduler [cd786b5d1eb5c7692963978ab7e14f2ebe279fbaaeba9270b1524a55945409ba] <==
	
	
	==> kube-scheduler [ed37f693bb8d6c8d41ce1afd6622548ac8432eaeab0e92e327d9b3c6dc94a239] <==
	I0612 21:23:24.679121       1 serving.go:380] Generated self-signed cert in-memory
	W0612 21:23:27.882618       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0612 21:23:27.883040       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 21:23:27.883146       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0612 21:23:27.883187       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 21:23:27.978261       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 21:23:27.978317       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:23:27.988557       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 21:23:27.988668       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 21:23:27.989640       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 21:23:27.992999       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 21:23:28.089838       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 21:23:23 pause-037058 kubelet[3477]: I0612 21:23:23.609570    3477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/ad9c67a0e8de957ecb1ae600e23986b0-etcd-certs\") pod \"etcd-pause-037058\" (UID: \"ad9c67a0e8de957ecb1ae600e23986b0\") " pod="kube-system/etcd-pause-037058"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: I0612 21:23:23.609592    3477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/ad9c67a0e8de957ecb1ae600e23986b0-etcd-data\") pod \"etcd-pause-037058\" (UID: \"ad9c67a0e8de957ecb1ae600e23986b0\") " pod="kube-system/etcd-pause-037058"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: E0612 21:23:23.610507    3477 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-037058?timeout=10s\": dial tcp 192.168.61.183:8443: connect: connection refused" interval="400ms"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: I0612 21:23:23.707455    3477 kubelet_node_status.go:73] "Attempting to register node" node="pause-037058"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: E0612 21:23:23.708310    3477 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.183:8443: connect: connection refused" node="pause-037058"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: I0612 21:23:23.877406    3477 scope.go:117] "RemoveContainer" containerID="008e0213094feaae473c4f22c7169eee1242f41344516677d0824d274db2f68a"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: I0612 21:23:23.878606    3477 scope.go:117] "RemoveContainer" containerID="50c42e02d02ef265c7e5a002369aec304334981a61ed2a6364afe368f9c73408"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: I0612 21:23:23.879591    3477 scope.go:117] "RemoveContainer" containerID="b9fcc14b6885077e88628d5ba20ddb714d13a4863c2dd9ec01d5ecbc66e230b6"
	Jun 12 21:23:23 pause-037058 kubelet[3477]: I0612 21:23:23.880441    3477 scope.go:117] "RemoveContainer" containerID="cd786b5d1eb5c7692963978ab7e14f2ebe279fbaaeba9270b1524a55945409ba"
	Jun 12 21:23:24 pause-037058 kubelet[3477]: E0612 21:23:24.012388    3477 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-037058?timeout=10s\": dial tcp 192.168.61.183:8443: connect: connection refused" interval="800ms"
	Jun 12 21:23:24 pause-037058 kubelet[3477]: I0612 21:23:24.111816    3477 kubelet_node_status.go:73] "Attempting to register node" node="pause-037058"
	Jun 12 21:23:24 pause-037058 kubelet[3477]: E0612 21:23:24.112695    3477 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.183:8443: connect: connection refused" node="pause-037058"
	Jun 12 21:23:24 pause-037058 kubelet[3477]: I0612 21:23:24.914804    3477 kubelet_node_status.go:73] "Attempting to register node" node="pause-037058"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.064982    3477 kubelet_node_status.go:112] "Node was previously registered" node="pause-037058"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.065130    3477 kubelet_node_status.go:76] "Successfully registered node" node="pause-037058"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.068172    3477 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.069531    3477 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.400525    3477 apiserver.go:52] "Watching apiserver"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.404623    3477 topology_manager.go:215] "Topology Admit Handler" podUID="3366c4e4-7aae-4051-97b5-f0544c6dfe66" podNamespace="kube-system" podName="kube-proxy-scm6r"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.406075    3477 topology_manager.go:215] "Topology Admit Handler" podUID="9cf2e0b5-0b9f-4d88-b14b-7e9f35c610fc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2kgfl"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.505277    3477 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.578795    3477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3366c4e4-7aae-4051-97b5-f0544c6dfe66-xtables-lock\") pod \"kube-proxy-scm6r\" (UID: \"3366c4e4-7aae-4051-97b5-f0544c6dfe66\") " pod="kube-system/kube-proxy-scm6r"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.579274    3477 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3366c4e4-7aae-4051-97b5-f0544c6dfe66-lib-modules\") pod \"kube-proxy-scm6r\" (UID: \"3366c4e4-7aae-4051-97b5-f0544c6dfe66\") " pod="kube-system/kube-proxy-scm6r"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.707652    3477 scope.go:117] "RemoveContainer" containerID="544c4dcbf5456e439d42ae9273aeddc8e8a57cdfc4b1d787747e1cb5efa59463"
	Jun 12 21:23:28 pause-037058 kubelet[3477]: I0612 21:23:28.708241    3477 scope.go:117] "RemoveContainer" containerID="4d71c33b9947b2c38a245115fcddcc4f50efbfbd631a39f930316bb8fbf43541"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-037058 -n pause-037058
helpers_test.go:261: (dbg) Run:  kubectl --context pause-037058 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (50.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (279.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-983302 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-983302 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m38.793614446s)

                                                
                                                
-- stdout --
	* [old-k8s-version-983302] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-983302" primary control-plane node in "old-k8s-version-983302" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 21:27:35.268613   73322 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:27:35.268726   73322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:27:35.268737   73322 out.go:304] Setting ErrFile to fd 2...
	I0612 21:27:35.268744   73322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:27:35.268905   73322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:27:35.269480   73322 out.go:298] Setting JSON to false
	I0612 21:27:35.270514   73322 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7800,"bootTime":1718219855,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:27:35.270571   73322 start.go:139] virtualization: kvm guest
	I0612 21:27:35.272916   73322 out.go:177] * [old-k8s-version-983302] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:27:35.274277   73322 notify.go:220] Checking for updates...
	I0612 21:27:35.274305   73322 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:27:35.275719   73322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:27:35.277608   73322 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:27:35.279454   73322 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:27:35.280920   73322 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:27:35.282232   73322 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:27:35.283937   73322 config.go:182] Loaded profile config "bridge-701638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:27:35.284087   73322 config.go:182] Loaded profile config "enable-default-cni-701638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:27:35.284184   73322 config.go:182] Loaded profile config "flannel-701638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:27:35.284343   73322 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:27:35.328073   73322 out.go:177] * Using the kvm2 driver based on user configuration
	I0612 21:27:35.329488   73322 start.go:297] selected driver: kvm2
	I0612 21:27:35.329513   73322 start.go:901] validating driver "kvm2" against <nil>
	I0612 21:27:35.329524   73322 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:27:35.330187   73322 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:27:35.330252   73322 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:27:35.347619   73322 install.go:137] /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:27:35.347661   73322 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0612 21:27:35.347845   73322 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:27:35.347903   73322 cni.go:84] Creating CNI manager for ""
	I0612 21:27:35.347915   73322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:27:35.347926   73322 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0612 21:27:35.347971   73322 start.go:340] cluster config:
	{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:27:35.348070   73322 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:27:35.349964   73322 out.go:177] * Starting "old-k8s-version-983302" primary control-plane node in "old-k8s-version-983302" cluster
	I0612 21:27:35.351164   73322 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:27:35.351233   73322 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0612 21:27:35.351241   73322 cache.go:56] Caching tarball of preloaded images
	I0612 21:27:35.351355   73322 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:27:35.351369   73322 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0612 21:27:35.351477   73322 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:27:35.351500   73322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json: {Name:mkdbdeb1ed13f3820805aab6f58d17a7b4e16dc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:27:35.351664   73322 start.go:360] acquireMachinesLock for old-k8s-version-983302: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:27:35.351709   73322 start.go:364] duration metric: took 25.623µs to acquireMachinesLock for "old-k8s-version-983302"
	I0612 21:27:35.351731   73322 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:27:35.351823   73322 start.go:125] createHost starting for "" (driver="kvm2")
	I0612 21:27:35.353478   73322 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 21:27:35.353600   73322 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:27:35.353642   73322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:27:35.368318   73322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40025
	I0612 21:27:35.368786   73322 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:27:35.369374   73322 main.go:141] libmachine: Using API Version  1
	I0612 21:27:35.369404   73322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:27:35.369703   73322 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:27:35.369919   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:27:35.370081   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:27:35.370231   73322 start.go:159] libmachine.API.Create for "old-k8s-version-983302" (driver="kvm2")
	I0612 21:27:35.370267   73322 client.go:168] LocalClient.Create starting
	I0612 21:27:35.370302   73322 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem
	I0612 21:27:35.370335   73322 main.go:141] libmachine: Decoding PEM data...
	I0612 21:27:35.370351   73322 main.go:141] libmachine: Parsing certificate...
	I0612 21:27:35.370404   73322 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem
	I0612 21:27:35.370421   73322 main.go:141] libmachine: Decoding PEM data...
	I0612 21:27:35.370434   73322 main.go:141] libmachine: Parsing certificate...
	I0612 21:27:35.370447   73322 main.go:141] libmachine: Running pre-create checks...
	I0612 21:27:35.370461   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .PreCreateCheck
	I0612 21:27:35.370768   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetConfigRaw
	I0612 21:27:35.371143   73322 main.go:141] libmachine: Creating machine...
	I0612 21:27:35.371158   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .Create
	I0612 21:27:35.371332   73322 main.go:141] libmachine: (old-k8s-version-983302) Creating KVM machine...
	I0612 21:27:35.372571   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found existing default KVM network
	I0612 21:27:35.373811   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:35.373641   73345 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:12:4d:86} reservation:<nil>}
	I0612 21:27:35.374835   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:35.374769   73345 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000288990}
	I0612 21:27:35.374873   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | created network xml: 
	I0612 21:27:35.374891   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | <network>
	I0612 21:27:35.374905   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG |   <name>mk-old-k8s-version-983302</name>
	I0612 21:27:35.374913   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG |   <dns enable='no'/>
	I0612 21:27:35.374923   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG |   
	I0612 21:27:35.374936   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0612 21:27:35.374955   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG |     <dhcp>
	I0612 21:27:35.374963   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0612 21:27:35.374971   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG |     </dhcp>
	I0612 21:27:35.374980   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG |   </ip>
	I0612 21:27:35.375008   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG |   
	I0612 21:27:35.375028   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | </network>
	I0612 21:27:35.375040   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | 
	I0612 21:27:35.380104   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | trying to create private KVM network mk-old-k8s-version-983302 192.168.50.0/24...
	I0612 21:27:35.461598   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | private KVM network mk-old-k8s-version-983302 192.168.50.0/24 created
	I0612 21:27:35.461629   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:35.461573   73345 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:27:35.461643   73322 main.go:141] libmachine: (old-k8s-version-983302) Setting up store path in /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302 ...
	I0612 21:27:35.461674   73322 main.go:141] libmachine: (old-k8s-version-983302) Building disk image from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0612 21:27:35.461726   73322 main.go:141] libmachine: (old-k8s-version-983302) Downloading /home/jenkins/minikube-integration/17779-14199/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0612 21:27:35.723925   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:35.723825   73345 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa...
	I0612 21:27:35.807196   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:35.807013   73345 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/old-k8s-version-983302.rawdisk...
	I0612 21:27:35.807243   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Writing magic tar header
	I0612 21:27:35.807263   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Writing SSH key tar header
	I0612 21:27:35.807276   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:35.807223   73345 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302 ...
	I0612 21:27:35.807478   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302
	I0612 21:27:35.807512   73322 main.go:141] libmachine: (old-k8s-version-983302) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302 (perms=drwx------)
	I0612 21:27:35.807530   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines
	I0612 21:27:35.807545   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:27:35.807565   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199
	I0612 21:27:35.807580   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0612 21:27:35.807592   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Checking permissions on dir: /home/jenkins
	I0612 21:27:35.807610   73322 main.go:141] libmachine: (old-k8s-version-983302) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines (perms=drwxr-xr-x)
	I0612 21:27:35.807626   73322 main.go:141] libmachine: (old-k8s-version-983302) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube (perms=drwxr-xr-x)
	I0612 21:27:35.807638   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Checking permissions on dir: /home
	I0612 21:27:35.807652   73322 main.go:141] libmachine: (old-k8s-version-983302) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199 (perms=drwxrwxr-x)
	I0612 21:27:35.807666   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Skipping /home - not owner
	I0612 21:27:35.807686   73322 main.go:141] libmachine: (old-k8s-version-983302) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0612 21:27:35.807702   73322 main.go:141] libmachine: (old-k8s-version-983302) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0612 21:27:35.807710   73322 main.go:141] libmachine: (old-k8s-version-983302) Creating domain...
	I0612 21:27:35.808822   73322 main.go:141] libmachine: (old-k8s-version-983302) define libvirt domain using xml: 
	I0612 21:27:35.808844   73322 main.go:141] libmachine: (old-k8s-version-983302) <domain type='kvm'>
	I0612 21:27:35.808871   73322 main.go:141] libmachine: (old-k8s-version-983302)   <name>old-k8s-version-983302</name>
	I0612 21:27:35.808887   73322 main.go:141] libmachine: (old-k8s-version-983302)   <memory unit='MiB'>2200</memory>
	I0612 21:27:35.808898   73322 main.go:141] libmachine: (old-k8s-version-983302)   <vcpu>2</vcpu>
	I0612 21:27:35.808906   73322 main.go:141] libmachine: (old-k8s-version-983302)   <features>
	I0612 21:27:35.808927   73322 main.go:141] libmachine: (old-k8s-version-983302)     <acpi/>
	I0612 21:27:35.808942   73322 main.go:141] libmachine: (old-k8s-version-983302)     <apic/>
	I0612 21:27:35.808964   73322 main.go:141] libmachine: (old-k8s-version-983302)     <pae/>
	I0612 21:27:35.808978   73322 main.go:141] libmachine: (old-k8s-version-983302)     
	I0612 21:27:35.808996   73322 main.go:141] libmachine: (old-k8s-version-983302)   </features>
	I0612 21:27:35.809012   73322 main.go:141] libmachine: (old-k8s-version-983302)   <cpu mode='host-passthrough'>
	I0612 21:27:35.809029   73322 main.go:141] libmachine: (old-k8s-version-983302)   
	I0612 21:27:35.809037   73322 main.go:141] libmachine: (old-k8s-version-983302)   </cpu>
	I0612 21:27:35.809053   73322 main.go:141] libmachine: (old-k8s-version-983302)   <os>
	I0612 21:27:35.809061   73322 main.go:141] libmachine: (old-k8s-version-983302)     <type>hvm</type>
	I0612 21:27:35.809080   73322 main.go:141] libmachine: (old-k8s-version-983302)     <boot dev='cdrom'/>
	I0612 21:27:35.809089   73322 main.go:141] libmachine: (old-k8s-version-983302)     <boot dev='hd'/>
	I0612 21:27:35.809104   73322 main.go:141] libmachine: (old-k8s-version-983302)     <bootmenu enable='no'/>
	I0612 21:27:35.809118   73322 main.go:141] libmachine: (old-k8s-version-983302)   </os>
	I0612 21:27:35.809134   73322 main.go:141] libmachine: (old-k8s-version-983302)   <devices>
	I0612 21:27:35.809148   73322 main.go:141] libmachine: (old-k8s-version-983302)     <disk type='file' device='cdrom'>
	I0612 21:27:35.809172   73322 main.go:141] libmachine: (old-k8s-version-983302)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/boot2docker.iso'/>
	I0612 21:27:35.809188   73322 main.go:141] libmachine: (old-k8s-version-983302)       <target dev='hdc' bus='scsi'/>
	I0612 21:27:35.809201   73322 main.go:141] libmachine: (old-k8s-version-983302)       <readonly/>
	I0612 21:27:35.809211   73322 main.go:141] libmachine: (old-k8s-version-983302)     </disk>
	I0612 21:27:35.809218   73322 main.go:141] libmachine: (old-k8s-version-983302)     <disk type='file' device='disk'>
	I0612 21:27:35.809237   73322 main.go:141] libmachine: (old-k8s-version-983302)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0612 21:27:35.809254   73322 main.go:141] libmachine: (old-k8s-version-983302)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/old-k8s-version-983302.rawdisk'/>
	I0612 21:27:35.809260   73322 main.go:141] libmachine: (old-k8s-version-983302)       <target dev='hda' bus='virtio'/>
	I0612 21:27:35.809271   73322 main.go:141] libmachine: (old-k8s-version-983302)     </disk>
	I0612 21:27:35.809276   73322 main.go:141] libmachine: (old-k8s-version-983302)     <interface type='network'>
	I0612 21:27:35.809282   73322 main.go:141] libmachine: (old-k8s-version-983302)       <source network='mk-old-k8s-version-983302'/>
	I0612 21:27:35.809290   73322 main.go:141] libmachine: (old-k8s-version-983302)       <model type='virtio'/>
	I0612 21:27:35.809303   73322 main.go:141] libmachine: (old-k8s-version-983302)     </interface>
	I0612 21:27:35.809311   73322 main.go:141] libmachine: (old-k8s-version-983302)     <interface type='network'>
	I0612 21:27:35.809321   73322 main.go:141] libmachine: (old-k8s-version-983302)       <source network='default'/>
	I0612 21:27:35.809333   73322 main.go:141] libmachine: (old-k8s-version-983302)       <model type='virtio'/>
	I0612 21:27:35.809351   73322 main.go:141] libmachine: (old-k8s-version-983302)     </interface>
	I0612 21:27:35.809361   73322 main.go:141] libmachine: (old-k8s-version-983302)     <serial type='pty'>
	I0612 21:27:35.809375   73322 main.go:141] libmachine: (old-k8s-version-983302)       <target port='0'/>
	I0612 21:27:35.809387   73322 main.go:141] libmachine: (old-k8s-version-983302)     </serial>
	I0612 21:27:35.809400   73322 main.go:141] libmachine: (old-k8s-version-983302)     <console type='pty'>
	I0612 21:27:35.809413   73322 main.go:141] libmachine: (old-k8s-version-983302)       <target type='serial' port='0'/>
	I0612 21:27:35.809427   73322 main.go:141] libmachine: (old-k8s-version-983302)     </console>
	I0612 21:27:35.809438   73322 main.go:141] libmachine: (old-k8s-version-983302)     <rng model='virtio'>
	I0612 21:27:35.809459   73322 main.go:141] libmachine: (old-k8s-version-983302)       <backend model='random'>/dev/random</backend>
	I0612 21:27:35.809474   73322 main.go:141] libmachine: (old-k8s-version-983302)     </rng>
	I0612 21:27:35.809487   73322 main.go:141] libmachine: (old-k8s-version-983302)     
	I0612 21:27:35.809499   73322 main.go:141] libmachine: (old-k8s-version-983302)     
	I0612 21:27:35.809507   73322 main.go:141] libmachine: (old-k8s-version-983302)   </devices>
	I0612 21:27:35.809518   73322 main.go:141] libmachine: (old-k8s-version-983302) </domain>
	I0612 21:27:35.809536   73322 main.go:141] libmachine: (old-k8s-version-983302) 
	I0612 21:27:35.814573   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:de:84:26 in network default
	I0612 21:27:35.815102   73322 main.go:141] libmachine: (old-k8s-version-983302) Ensuring networks are active...
	I0612 21:27:35.815126   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:35.816003   73322 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network default is active
	I0612 21:27:35.816179   73322 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network mk-old-k8s-version-983302 is active
	I0612 21:27:35.816940   73322 main.go:141] libmachine: (old-k8s-version-983302) Getting domain xml...
	I0612 21:27:35.819258   73322 main.go:141] libmachine: (old-k8s-version-983302) Creating domain...
	I0612 21:27:37.401649   73322 main.go:141] libmachine: (old-k8s-version-983302) Waiting to get IP...
	I0612 21:27:37.402873   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:37.403558   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:27:37.403590   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:37.403542   73345 retry.go:31] will retry after 259.963135ms: waiting for machine to come up
	I0612 21:27:37.665268   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:37.666108   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:27:37.666136   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:37.666073   73345 retry.go:31] will retry after 251.214989ms: waiting for machine to come up
	I0612 21:27:37.918621   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:37.919650   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:27:37.919681   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:37.919605   73345 retry.go:31] will retry after 343.720017ms: waiting for machine to come up
	I0612 21:27:38.265394   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:38.266049   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:27:38.266084   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:38.265966   73345 retry.go:31] will retry after 577.194639ms: waiting for machine to come up
	I0612 21:27:38.844458   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:38.845033   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:27:38.845066   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:38.845001   73345 retry.go:31] will retry after 612.964115ms: waiting for machine to come up
	I0612 21:27:39.460053   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:39.460648   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:27:39.460680   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:39.460599   73345 retry.go:31] will retry after 638.863541ms: waiting for machine to come up
	I0612 21:27:40.102977   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:40.103749   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:27:40.103774   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:40.103676   73345 retry.go:31] will retry after 1.025169961s: waiting for machine to come up
	I0612 21:27:41.130668   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:41.131357   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:27:41.131387   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:41.131320   73345 retry.go:31] will retry after 1.301108705s: waiting for machine to come up
	I0612 21:27:42.434829   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:42.435457   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:27:42.435482   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:42.435407   73345 retry.go:31] will retry after 1.643444564s: waiting for machine to come up
	I0612 21:27:44.081027   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:44.081661   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:27:44.081682   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:44.081594   73345 retry.go:31] will retry after 1.900424919s: waiting for machine to come up
	I0612 21:27:45.983580   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:45.984126   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:27:45.984155   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:45.984080   73345 retry.go:31] will retry after 2.83740215s: waiting for machine to come up
	I0612 21:27:48.824333   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:48.824994   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:27:48.825017   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:48.824927   73345 retry.go:31] will retry after 2.248588361s: waiting for machine to come up
	I0612 21:27:51.074715   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:51.075285   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:27:51.075323   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:51.075246   73345 retry.go:31] will retry after 3.265949842s: waiting for machine to come up
	I0612 21:27:54.344429   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:54.344935   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:27:54.344965   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:27:54.344869   73345 retry.go:31] will retry after 4.892162732s: waiting for machine to come up
	I0612 21:27:59.240028   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:59.240549   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has current primary IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:59.240591   73322 main.go:141] libmachine: (old-k8s-version-983302) Found IP for machine: 192.168.50.81
	I0612 21:27:59.240618   73322 main.go:141] libmachine: (old-k8s-version-983302) Reserving static IP address...
	I0612 21:27:59.240994   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-983302", mac: "52:54:00:7b:c8:d2", ip: "192.168.50.81"} in network mk-old-k8s-version-983302
	I0612 21:27:59.318688   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Getting to WaitForSSH function...
	I0612 21:27:59.318714   73322 main.go:141] libmachine: (old-k8s-version-983302) Reserved static IP address: 192.168.50.81
	I0612 21:27:59.318724   73322 main.go:141] libmachine: (old-k8s-version-983302) Waiting for SSH to be available...
	I0612 21:27:59.322277   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:27:59.322703   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302
	I0612 21:27:59.322731   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find defined IP address of network mk-old-k8s-version-983302 interface with MAC address 52:54:00:7b:c8:d2
	I0612 21:27:59.322880   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH client type: external
	I0612 21:27:59.322909   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa (-rw-------)
	I0612 21:27:59.322953   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:27:59.322969   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | About to run SSH command:
	I0612 21:27:59.323007   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | exit 0
	I0612 21:27:59.327139   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | SSH cmd err, output: exit status 255: 
	I0612 21:27:59.327164   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0612 21:27:59.327192   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | command : exit 0
	I0612 21:27:59.327205   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | err     : exit status 255
	I0612 21:27:59.327220   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | output  : 
	I0612 21:28:02.327463   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Getting to WaitForSSH function...
	I0612 21:28:02.330187   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:02.330622   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:02.330650   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:02.330780   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH client type: external
	I0612 21:28:02.330807   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa (-rw-------)
	I0612 21:28:02.330839   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:28:02.330862   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | About to run SSH command:
	I0612 21:28:02.330874   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | exit 0
	I0612 21:28:02.455940   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | SSH cmd err, output: <nil>: 
	I0612 21:28:02.456263   73322 main.go:141] libmachine: (old-k8s-version-983302) KVM machine creation complete!
	I0612 21:28:02.456672   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetConfigRaw
	I0612 21:28:02.457393   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:28:02.457622   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:28:02.457793   73322 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0612 21:28:02.457812   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetState
	I0612 21:28:02.459258   73322 main.go:141] libmachine: Detecting operating system of created instance...
	I0612 21:28:02.459280   73322 main.go:141] libmachine: Waiting for SSH to be available...
	I0612 21:28:02.459288   73322 main.go:141] libmachine: Getting to WaitForSSH function...
	I0612 21:28:02.459297   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:28:02.462239   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:02.462687   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:02.462728   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:02.462850   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:28:02.463021   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:02.463232   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:02.463399   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:28:02.463590   73322 main.go:141] libmachine: Using SSH client type: native
	I0612 21:28:02.463852   73322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:28:02.463871   73322 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0612 21:28:02.578969   73322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:28:02.578997   73322 main.go:141] libmachine: Detecting the provisioner...
	I0612 21:28:02.579009   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:28:02.582180   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:02.582569   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:02.582601   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:02.582785   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:28:02.582954   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:02.583119   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:02.583323   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:28:02.583523   73322 main.go:141] libmachine: Using SSH client type: native
	I0612 21:28:02.583699   73322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:28:02.583711   73322 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0612 21:28:02.701722   73322 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0612 21:28:02.701801   73322 main.go:141] libmachine: found compatible host: buildroot
	I0612 21:28:02.701811   73322 main.go:141] libmachine: Provisioning with buildroot...
	I0612 21:28:02.701822   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:28:02.702066   73322 buildroot.go:166] provisioning hostname "old-k8s-version-983302"
	I0612 21:28:02.702098   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:28:02.702308   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:28:02.705380   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:02.705781   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:02.705831   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:02.706025   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:28:02.706215   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:02.706417   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:02.706556   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:28:02.706730   73322 main.go:141] libmachine: Using SSH client type: native
	I0612 21:28:02.706950   73322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:28:02.706968   73322 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-983302 && echo "old-k8s-version-983302" | sudo tee /etc/hostname
	I0612 21:28:02.837530   73322 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-983302
	
	I0612 21:28:02.837562   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:28:02.840762   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:02.841117   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:02.841147   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:02.841340   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:28:02.841526   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:02.841700   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:02.841900   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:28:02.842142   73322 main.go:141] libmachine: Using SSH client type: native
	I0612 21:28:02.842388   73322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:28:02.842415   73322 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-983302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-983302/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-983302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:28:02.965894   73322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:28:02.965937   73322 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:28:02.965988   73322 buildroot.go:174] setting up certificates
	I0612 21:28:02.966006   73322 provision.go:84] configureAuth start
	I0612 21:28:02.966024   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:28:02.966334   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:28:02.969682   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:02.970063   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:02.970097   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:02.970250   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:28:02.972905   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:02.973333   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:02.973381   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:02.973561   73322 provision.go:143] copyHostCerts
	I0612 21:28:02.973647   73322 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:28:02.973671   73322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:28:02.973745   73322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:28:02.973884   73322 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:28:02.973902   73322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:28:02.973942   73322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:28:02.974099   73322 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:28:02.974115   73322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:28:02.974152   73322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:28:02.974253   73322 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-983302 san=[127.0.0.1 192.168.50.81 localhost minikube old-k8s-version-983302]
	I0612 21:28:03.060990   73322 provision.go:177] copyRemoteCerts
	I0612 21:28:03.061051   73322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:28:03.061078   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:28:03.064109   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.064513   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:03.064540   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.064695   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:28:03.064896   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:03.065065   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:28:03.065213   73322 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:28:03.154236   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:28:03.183111   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0612 21:28:03.215066   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:28:03.245380   73322 provision.go:87] duration metric: took 279.360397ms to configureAuth
	I0612 21:28:03.245410   73322 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:28:03.245605   73322 config.go:182] Loaded profile config "old-k8s-version-983302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:28:03.245696   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:28:03.249300   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.249808   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:03.249843   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.250179   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:28:03.250388   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:03.250613   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:03.250809   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:28:03.250982   73322 main.go:141] libmachine: Using SSH client type: native
	I0612 21:28:03.251232   73322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:28:03.251254   73322 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:28:03.584773   73322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
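	Note: the SSH command above writes a CRI-O sysconfig drop-in that marks the in-cluster service CIDR (10.96.0.0/12) as an insecure registry and then restarts CRI-O. A minimal shell sketch for reproducing or checking the step by hand on the guest (assumes sudo over the same SSH session):
	    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio
	    cat /etc/sysconfig/crio.minikube   # should show the line echoed in the log output above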
	
	I0612 21:28:03.584803   73322 main.go:141] libmachine: Checking connection to Docker...
	I0612 21:28:03.584814   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetURL
	I0612 21:28:03.586219   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using libvirt version 6000000
	I0612 21:28:03.588972   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.589285   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:03.589316   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.589498   73322 main.go:141] libmachine: Docker is up and running!
	I0612 21:28:03.589512   73322 main.go:141] libmachine: Reticulating splines...
	I0612 21:28:03.589518   73322 client.go:171] duration metric: took 28.219241444s to LocalClient.Create
	I0612 21:28:03.589540   73322 start.go:167] duration metric: took 28.219312088s to libmachine.API.Create "old-k8s-version-983302"
	I0612 21:28:03.589552   73322 start.go:293] postStartSetup for "old-k8s-version-983302" (driver="kvm2")
	I0612 21:28:03.589564   73322 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:28:03.589579   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:28:03.589840   73322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:28:03.589878   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:28:03.592471   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.592882   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:03.592918   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.593106   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:28:03.593285   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:03.593536   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:28:03.593700   73322 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:28:03.681725   73322 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:28:03.686730   73322 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:28:03.686766   73322 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:28:03.686830   73322 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:28:03.686920   73322 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:28:03.687053   73322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:28:03.697740   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:28:03.726770   73322 start.go:296] duration metric: took 137.200315ms for postStartSetup
	I0612 21:28:03.726860   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetConfigRaw
	I0612 21:28:03.727538   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:28:03.732252   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.733589   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:03.733621   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.734039   73322 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:28:03.734267   73322 start.go:128] duration metric: took 28.382432665s to createHost
	I0612 21:28:03.734306   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:28:03.738962   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.740798   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:03.740832   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.741053   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:28:03.741291   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:03.741462   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:03.741627   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:28:03.741816   73322 main.go:141] libmachine: Using SSH client type: native
	I0612 21:28:03.742025   73322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:28:03.742040   73322 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 21:28:03.856385   73322 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718227683.834007750
	
	I0612 21:28:03.856408   73322 fix.go:216] guest clock: 1718227683.834007750
	I0612 21:28:03.856416   73322 fix.go:229] Guest: 2024-06-12 21:28:03.83400775 +0000 UTC Remote: 2024-06-12 21:28:03.734289661 +0000 UTC m=+28.501097449 (delta=99.718089ms)
	I0612 21:28:03.856450   73322 fix.go:200] guest clock delta is within tolerance: 99.718089ms
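	Note: the fix.go lines above read the guest clock over SSH with 'date +%s.%N', compare it with the host clock, and accept the ~99ms delta as within tolerance. An illustrative standalone check (minikube uses its own SSH client and key; the docker user and 192.168.50.81 address come from the sshutil line above):
	    guest=$(ssh docker@192.168.50.81 'date +%s.%N')
	    host=$(date +%s.%N)
	    awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest-host delta: %+.6f s\n", g - h }'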
	I0612 21:28:03.856459   73322 start.go:83] releasing machines lock for "old-k8s-version-983302", held for 28.50473744s
	I0612 21:28:03.856489   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:28:03.856784   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:28:03.860160   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.860638   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:03.860668   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.860903   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:28:03.861500   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:28:03.861695   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:28:03.861814   73322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:28:03.861859   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:28:03.861870   73322 ssh_runner.go:195] Run: cat /version.json
	I0612 21:28:03.861884   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:28:03.864741   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.864987   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.865110   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:03.865138   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.865310   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:28:03.865447   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:03.865475   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:03.865481   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:03.865652   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:28:03.865660   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:28:03.865855   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:28:03.865863   73322 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:28:03.866008   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:28:03.866202   73322 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:28:03.946200   73322 ssh_runner.go:195] Run: systemctl --version
	I0612 21:28:03.968420   73322 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:28:04.137599   73322 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:28:04.143753   73322 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:28:04.143813   73322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:28:04.161060   73322 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:28:04.161086   73322 start.go:494] detecting cgroup driver to use...
	I0612 21:28:04.161151   73322 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:28:04.179637   73322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:28:04.193922   73322 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:28:04.193985   73322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:28:04.210160   73322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:28:04.223579   73322 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:28:04.340159   73322 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:28:04.509721   73322 docker.go:233] disabling docker service ...
	I0612 21:28:04.509786   73322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:28:04.526042   73322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:28:04.540330   73322 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:28:04.673978   73322 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:28:04.828951   73322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:28:04.846491   73322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:28:04.869199   73322 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0612 21:28:04.869273   73322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:28:04.882222   73322 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:28:04.882322   73322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:28:04.893341   73322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:28:04.904464   73322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:28:04.914875   73322 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
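	Note: taken together, the steps above point crictl at the CRI-O socket and adjust the CRI-O drop-in config for the pause image and cgroup driver. A consolidated shell sketch of those edits, using the same files and values shown in the log (assumes sudo on the guest):
	    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    sudo rm -rf /etc/cni/net.mk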
	I0612 21:28:04.929228   73322 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:28:04.946080   73322 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:28:04.946163   73322 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:28:04.964049   73322 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
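	Note: the sysctl probe above fails only because the br_netfilter module is not loaded yet, which the log treats as non-fatal; the module is then loaded and IPv4 forwarding enabled before CRI-O is restarted. Sketch of the fallback:
	    sudo modprobe br_netfilter
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo sysctl net.bridge.bridge-nf-call-iptables   # should now resolve instead of "No such file or directory"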
	I0612 21:28:04.976089   73322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:28:05.135022   73322 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:28:05.301175   73322 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:28:05.301245   73322 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:28:05.306237   73322 start.go:562] Will wait 60s for crictl version
	I0612 21:28:05.306303   73322 ssh_runner.go:195] Run: which crictl
	I0612 21:28:05.310593   73322 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:28:05.354808   73322 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:28:05.354893   73322 ssh_runner.go:195] Run: crio --version
	I0612 21:28:05.391745   73322 ssh_runner.go:195] Run: crio --version
	I0612 21:28:05.432103   73322 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0612 21:28:05.433452   73322 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:28:05.437073   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:05.437492   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:27:51 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:28:05.437519   73322 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:28:05.437758   73322 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0612 21:28:05.442488   73322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
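	Note: the /etc/hosts rewrite above filters out any stale host.minikube.internal entry, appends the gateway mapping, and stages the result in /tmp before copying it back. Equivalent standalone commands on the guest:
	    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.50.1\thost.minikube.internal'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
	    getent hosts host.minikube.internal   # should print 192.168.50.1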
	I0612 21:28:05.457320   73322 kubeadm.go:877] updating cluster {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:28:05.457470   73322 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:28:05.457543   73322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:28:05.499075   73322 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:28:05.499151   73322 ssh_runner.go:195] Run: which lz4
	I0612 21:28:05.504163   73322 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 21:28:05.509394   73322 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:28:05.509425   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0612 21:28:07.629607   73322 crio.go:462] duration metric: took 2.125472511s to copy over tarball
	I0612 21:28:07.629698   73322 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:28:10.685324   73322 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.055578459s)
	I0612 21:28:10.685364   73322 crio.go:469] duration metric: took 3.055719813s to extract the tarball
	I0612 21:28:10.685373   73322 ssh_runner.go:146] rm: /preloaded.tar.lz4
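	Note: because no v1.20.0 images were found in the runtime, the preloaded CRI-O tarball is copied to the guest and unpacked into /var, which is much faster than pulling each image. The copy itself is done by minikube's internal scp shown above; on the guest the unpack and cleanup amount to:
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4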
	I0612 21:28:10.731489   73322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:28:10.780408   73322 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:28:10.780439   73322 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0612 21:28:10.780517   73322 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:28:10.780525   73322 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:28:10.780525   73322 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:28:10.780553   73322 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0612 21:28:10.780579   73322 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:28:10.780603   73322 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:28:10.780631   73322 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0612 21:28:10.780603   73322 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:28:10.782188   73322 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:28:10.782204   73322 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:28:10.782254   73322 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:28:10.782283   73322 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:28:10.782187   73322 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0612 21:28:10.782188   73322 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:28:10.782188   73322 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:28:10.782188   73322 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0612 21:28:10.923546   73322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0612 21:28:10.924365   73322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:28:10.930050   73322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:28:10.933642   73322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:28:10.937170   73322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0612 21:28:10.938902   73322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:28:10.946869   73322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0612 21:28:11.026736   73322 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0612 21:28:11.026792   73322 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:28:11.026837   73322 ssh_runner.go:195] Run: which crictl
	I0612 21:28:11.074562   73322 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0612 21:28:11.074612   73322 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:28:11.074667   73322 ssh_runner.go:195] Run: which crictl
	I0612 21:28:11.114029   73322 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0612 21:28:11.114158   73322 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:28:11.114242   73322 ssh_runner.go:195] Run: which crictl
	I0612 21:28:11.123150   73322 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0612 21:28:11.123197   73322 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0612 21:28:11.123215   73322 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:28:11.123229   73322 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0612 21:28:11.123234   73322 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0612 21:28:11.123258   73322 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:28:11.123265   73322 ssh_runner.go:195] Run: which crictl
	I0612 21:28:11.123267   73322 ssh_runner.go:195] Run: which crictl
	I0612 21:28:11.123292   73322 ssh_runner.go:195] Run: which crictl
	I0612 21:28:11.129167   73322 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0612 21:28:11.129221   73322 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0612 21:28:11.129254   73322 ssh_runner.go:195] Run: which crictl
	I0612 21:28:11.129257   73322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:28:11.129279   73322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:28:11.129203   73322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0612 21:28:11.131419   73322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:28:11.136494   73322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:28:11.136578   73322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0612 21:28:11.275323   73322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0612 21:28:11.275355   73322 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0612 21:28:11.275417   73322 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0612 21:28:11.275479   73322 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0612 21:28:11.275505   73322 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0612 21:28:11.279722   73322 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0612 21:28:11.279775   73322 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0612 21:28:11.312413   73322 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0612 21:28:11.594767   73322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:28:11.740006   73322 cache_images.go:92] duration metric: took 959.548613ms to LoadCachedImages
	W0612 21:28:11.740089   73322 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0612 21:28:11.740106   73322 kubeadm.go:928] updating node { 192.168.50.81 8443 v1.20.0 crio true true} ...
	I0612 21:28:11.740234   73322 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-983302 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:28:11.740310   73322 ssh_runner.go:195] Run: crio config
	I0612 21:28:11.793514   73322 cni.go:84] Creating CNI manager for ""
	I0612 21:28:11.793540   73322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:28:11.793552   73322 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:28:11.793575   73322 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-983302 NodeName:old-k8s-version-983302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0612 21:28:11.793742   73322 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-983302"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:28:11.793816   73322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0612 21:28:11.806415   73322 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:28:11.806473   73322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:28:11.818167   73322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0612 21:28:11.836998   73322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:28:11.856705   73322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
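	Note: the kubeadm configuration printed above is what lands on the guest as /var/tmp/minikube/kubeadm.yaml.new (2120 bytes) and is consumed by the 'kubeadm init' run further down in this log. A sketch of how it is staged and invoked, with the preflight ignore list abbreviated from the full command that appears below:
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init \
	        --config /var/tmp/minikube/kubeadm.yaml \
	        --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem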
	I0612 21:28:11.875557   73322 ssh_runner.go:195] Run: grep 192.168.50.81	control-plane.minikube.internal$ /etc/hosts
	I0612 21:28:11.880526   73322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:28:11.898447   73322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:28:12.046516   73322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:28:12.069563   73322 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302 for IP: 192.168.50.81
	I0612 21:28:12.069591   73322 certs.go:194] generating shared ca certs ...
	I0612 21:28:12.069610   73322 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:28:12.069766   73322 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:28:12.069845   73322 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:28:12.069861   73322 certs.go:256] generating profile certs ...
	I0612 21:28:12.069935   73322 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/client.key
	I0612 21:28:12.069953   73322 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/client.crt with IP's: []
	I0612 21:28:12.438566   73322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/client.crt ...
	I0612 21:28:12.438604   73322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/client.crt: {Name:mk8b09f8fef3b624d7df85bc30414fbe123bbd7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:28:12.438809   73322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/client.key ...
	I0612 21:28:12.438845   73322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/client.key: {Name:mka56f1fdb0d5c78aa3d259142d04fe2f600446a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:28:12.438967   73322 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key.1098c83c
	I0612 21:28:12.438995   73322 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.crt.1098c83c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.81]
	I0612 21:28:12.702986   73322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.crt.1098c83c ...
	I0612 21:28:12.703012   73322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.crt.1098c83c: {Name:mkb06499d89c607add489c804a6359f41f95638e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:28:12.703196   73322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key.1098c83c ...
	I0612 21:28:12.703219   73322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key.1098c83c: {Name:mkee403391f7da0253f58a97c0640fdccf85f21e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:28:12.703320   73322 certs.go:381] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.crt.1098c83c -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.crt
	I0612 21:28:12.703410   73322 certs.go:385] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key.1098c83c -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key
	I0612 21:28:12.703485   73322 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key
	I0612 21:28:12.703511   73322 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.crt with IP's: []
	I0612 21:28:12.850479   73322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.crt ...
	I0612 21:28:12.850511   73322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.crt: {Name:mkeea01a8a4f44791762ccc7af9be39473d8387e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:28:12.850714   73322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key ...
	I0612 21:28:12.850737   73322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key: {Name:mke2f35923fafbc462d2fcb19f0c8b0f92594e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:28:12.850991   73322 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:28:12.851054   73322 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:28:12.851070   73322 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:28:12.851102   73322 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:28:12.851133   73322 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:28:12.851166   73322 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:28:12.851238   73322 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:28:12.852037   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:28:12.883688   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:28:12.920825   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:28:12.965100   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:28:12.988957   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0612 21:28:13.014561   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:28:13.042604   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:28:13.070763   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0612 21:28:13.096902   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:28:13.121010   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:28:13.143799   73322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:28:13.169809   73322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:28:13.189660   73322 ssh_runner.go:195] Run: openssl version
	I0612 21:28:13.197370   73322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:28:13.212518   73322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:28:13.218249   73322 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:28:13.218319   73322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:28:13.226183   73322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:28:13.240988   73322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:28:13.254761   73322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:28:13.259864   73322 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:28:13.259924   73322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:28:13.267756   73322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:28:13.282527   73322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:28:13.297491   73322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:28:13.303330   73322 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:28:13.303393   73322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:28:13.310332   73322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
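	Note: the openssl/ln pairs above install each CA into the system trust store under its OpenSSL subject hash, which is how /etc/ssl/certs/b5213941.0 and /etc/ssl/certs/51391683.0 come about. The general pattern, shown here for minikubeCA.pem:
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	    ls -l "/etc/ssl/certs/${hash}.0"   # b5213941.0 for this CA, matching the log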
	I0612 21:28:13.323420   73322 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:28:13.328251   73322 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 21:28:13.328297   73322 kubeadm.go:391] StartCluster: {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:28:13.328362   73322 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:28:13.328397   73322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:28:13.385547   73322 cri.go:89] found id: ""
	I0612 21:28:13.385618   73322 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0612 21:28:13.396586   73322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:28:13.408065   73322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:28:13.418854   73322 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:28:13.418875   73322 kubeadm.go:156] found existing configuration files:
	
	I0612 21:28:13.418921   73322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:28:13.429162   73322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:28:13.429222   73322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:28:13.440913   73322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:28:13.451660   73322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:28:13.451716   73322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:28:13.466599   73322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:28:13.476547   73322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:28:13.476610   73322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:28:13.487416   73322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:28:13.498111   73322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:28:13.498159   73322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:28:13.508656   73322 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:28:13.637681   73322 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:28:13.637954   73322 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:28:13.846242   73322 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:28:13.846385   73322 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:28:13.846504   73322 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:28:14.102812   73322 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:28:14.106255   73322 out.go:204]   - Generating certificates and keys ...
	I0612 21:28:14.106365   73322 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:28:14.106453   73322 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:28:14.690380   73322 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0612 21:28:14.801796   73322 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0612 21:28:15.196796   73322 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0612 21:28:15.407453   73322 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0612 21:28:15.583907   73322 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0612 21:28:15.584703   73322 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-983302] and IPs [192.168.50.81 127.0.0.1 ::1]
	I0612 21:28:16.226089   73322 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0612 21:28:16.226724   73322 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-983302] and IPs [192.168.50.81 127.0.0.1 ::1]
	I0612 21:28:16.445847   73322 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0612 21:28:16.769560   73322 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0612 21:28:17.382606   73322 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0612 21:28:17.382903   73322 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:28:17.444555   73322 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:28:17.779056   73322 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:28:18.559849   73322 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:28:18.899633   73322 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:28:18.920601   73322 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:28:18.921767   73322 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:28:18.921854   73322 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:28:19.068132   73322 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:28:19.239896   73322 out.go:204]   - Booting up control plane ...
	I0612 21:28:19.240115   73322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:28:19.240211   73322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:28:19.240309   73322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:28:19.240418   73322 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:28:19.240638   73322 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:28:59.090475   73322 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:28:59.090623   73322 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:28:59.090854   73322 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:29:04.090619   73322 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:29:04.090874   73322 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:29:14.090192   73322 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:29:14.090470   73322 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:29:34.090466   73322 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:29:34.090763   73322 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:30:14.093291   73322 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:30:14.093777   73322 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:30:14.093807   73322 kubeadm.go:309] 
	I0612 21:30:14.093897   73322 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:30:14.093984   73322 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:30:14.093993   73322 kubeadm.go:309] 
	I0612 21:30:14.094074   73322 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:30:14.094144   73322 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:30:14.094372   73322 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:30:14.094394   73322 kubeadm.go:309] 
	I0612 21:30:14.094626   73322 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:30:14.094702   73322 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:30:14.094776   73322 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:30:14.094788   73322 kubeadm.go:309] 
	I0612 21:30:14.095055   73322 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:30:14.095257   73322 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:30:14.095274   73322 kubeadm.go:309] 
	I0612 21:30:14.095497   73322 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:30:14.095745   73322 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:30:14.096155   73322 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:30:14.096335   73322 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:30:14.096359   73322 kubeadm.go:309] 
	I0612 21:30:14.096834   73322 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:30:14.096945   73322 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:30:14.097032   73322 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0612 21:30:14.097199   73322 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-983302] and IPs [192.168.50.81 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-983302] and IPs [192.168.50.81 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0612 21:30:14.097253   73322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:30:16.220291   73322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.123009757s)
	I0612 21:30:16.220371   73322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:30:16.239734   73322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:30:16.251356   73322 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:30:16.251380   73322 kubeadm.go:156] found existing configuration files:
	
	I0612 21:30:16.251432   73322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:30:16.265647   73322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:30:16.265716   73322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:30:16.278918   73322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:30:16.289824   73322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:30:16.289900   73322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:30:16.301594   73322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:30:16.312179   73322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:30:16.312240   73322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:30:16.323422   73322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:30:16.334771   73322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:30:16.334846   73322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:30:16.346768   73322 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:30:16.440798   73322 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:30:16.440854   73322 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:30:16.615524   73322 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:30:16.615664   73322 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:30:16.615814   73322 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:30:16.809680   73322 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:30:16.812575   73322 out.go:204]   - Generating certificates and keys ...
	I0612 21:30:16.812690   73322 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:30:16.812770   73322 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:30:16.812873   73322 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:30:16.812989   73322 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:30:16.813104   73322 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:30:16.813186   73322 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:30:16.813283   73322 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:30:16.813374   73322 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:30:16.813484   73322 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:30:16.813595   73322 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:30:16.813646   73322 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:30:16.813727   73322 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:30:17.087638   73322 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:30:17.433470   73322 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:30:17.683329   73322 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:30:18.203224   73322 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:30:18.224065   73322 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:30:18.225931   73322 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:30:18.226015   73322 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:30:18.385552   73322 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:30:18.387500   73322 out.go:204]   - Booting up control plane ...
	I0612 21:30:18.387635   73322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:30:18.396061   73322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:30:18.398703   73322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:30:18.398821   73322 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:30:18.401505   73322 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:30:58.404136   73322 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:30:58.404275   73322 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:30:58.404547   73322 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:31:03.405142   73322 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:31:03.405358   73322 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:31:13.404827   73322 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:31:13.405078   73322 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:31:33.404471   73322 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:31:33.404682   73322 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:32:13.404523   73322 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:32:13.404722   73322 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:32:13.404737   73322 kubeadm.go:309] 
	I0612 21:32:13.404791   73322 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:32:13.404828   73322 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:32:13.404834   73322 kubeadm.go:309] 
	I0612 21:32:13.404861   73322 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:32:13.404904   73322 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:32:13.405029   73322 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:32:13.405038   73322 kubeadm.go:309] 
	I0612 21:32:13.405129   73322 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:32:13.405166   73322 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:32:13.405197   73322 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:32:13.405204   73322 kubeadm.go:309] 
	I0612 21:32:13.405289   73322 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:32:13.405371   73322 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:32:13.405390   73322 kubeadm.go:309] 
	I0612 21:32:13.405483   73322 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:32:13.405559   73322 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:32:13.405622   73322 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:32:13.405750   73322 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:32:13.405773   73322 kubeadm.go:309] 
	I0612 21:32:13.407151   73322 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:32:13.407281   73322 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:32:13.407436   73322 kubeadm.go:393] duration metric: took 4m0.079142763s to StartCluster
	I0612 21:32:13.407476   73322 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:32:13.407516   73322 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:32:13.407586   73322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:32:13.452853   73322 cri.go:89] found id: ""
	I0612 21:32:13.452881   73322 logs.go:276] 0 containers: []
	W0612 21:32:13.452889   73322 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:32:13.452895   73322 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:32:13.452961   73322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:32:13.492503   73322 cri.go:89] found id: ""
	I0612 21:32:13.492534   73322 logs.go:276] 0 containers: []
	W0612 21:32:13.492543   73322 logs.go:278] No container was found matching "etcd"
	I0612 21:32:13.492549   73322 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:32:13.492606   73322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:32:13.526152   73322 cri.go:89] found id: ""
	I0612 21:32:13.526176   73322 logs.go:276] 0 containers: []
	W0612 21:32:13.526185   73322 logs.go:278] No container was found matching "coredns"
	I0612 21:32:13.526191   73322 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:32:13.526250   73322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:32:13.560479   73322 cri.go:89] found id: ""
	I0612 21:32:13.560511   73322 logs.go:276] 0 containers: []
	W0612 21:32:13.560530   73322 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:32:13.560538   73322 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:32:13.560599   73322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:32:13.598077   73322 cri.go:89] found id: ""
	I0612 21:32:13.598104   73322 logs.go:276] 0 containers: []
	W0612 21:32:13.598115   73322 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:32:13.598122   73322 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:32:13.598185   73322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:32:13.632478   73322 cri.go:89] found id: ""
	I0612 21:32:13.632505   73322 logs.go:276] 0 containers: []
	W0612 21:32:13.632515   73322 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:32:13.632523   73322 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:32:13.632584   73322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:32:13.670188   73322 cri.go:89] found id: ""
	I0612 21:32:13.670213   73322 logs.go:276] 0 containers: []
	W0612 21:32:13.670221   73322 logs.go:278] No container was found matching "kindnet"
	I0612 21:32:13.670231   73322 logs.go:123] Gathering logs for kubelet ...
	I0612 21:32:13.670243   73322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:32:13.720993   73322 logs.go:123] Gathering logs for dmesg ...
	I0612 21:32:13.721020   73322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:32:13.734139   73322 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:32:13.734166   73322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:32:13.870400   73322 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:32:13.870424   73322 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:32:13.870436   73322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:32:13.965076   73322 logs.go:123] Gathering logs for container status ...
	I0612 21:32:13.965111   73322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0612 21:32:14.008821   73322 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0612 21:32:14.008866   73322 out.go:239] * 
	W0612 21:32:14.008931   73322 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:32:14.008960   73322 out.go:239] * 
	W0612 21:32:14.010032   73322 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0612 21:32:14.015197   73322 out.go:177] 
	W0612 21:32:14.016622   73322 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:32:14.016701   73322 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0612 21:32:14.016729   73322 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0612 21:32:14.018995   73322 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-983302 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302: exit status 6 (221.627543ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:32:14.285860   79705 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-983302" does not appear in /home/jenkins/minikube-integration/17779-14199/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-983302" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (279.07s)
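
A minimal triage sketch for the K8S_KUBELET_NOT_RUNNING failure above, assembled only from commands the error text itself suggests; the profile name old-k8s-version-983302 is taken from the log, and the cgroup-driver flag is the suggestion minikube prints, not a verified fix:

	# inspect the kubelet on the node (commands quoted from the kubeadm error output)
	out/minikube-linux-amd64 -p old-k8s-version-983302 ssh -- sudo systemctl status kubelet
	out/minikube-linux-amd64 -p old-k8s-version-983302 ssh -- sudo journalctl -xeu kubelet
	# list control-plane containers via cri-o, as the kubeadm message recommends
	out/minikube-linux-amd64 -p old-k8s-version-983302 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# retry with the cgroup-driver override suggested in the warning above
	out/minikube-linux-amd64 start -p old-k8s-version-983302 --extra-config=kubelet.cgroup-driver=systemd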

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-087875 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-087875 --alsologtostderr -v=3: exit status 82 (2m0.525892395s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-087875"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 21:30:15.866128   79038 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:30:15.866366   79038 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:30:15.866374   79038 out.go:304] Setting ErrFile to fd 2...
	I0612 21:30:15.866379   79038 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:30:15.866541   79038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:30:15.866753   79038 out.go:298] Setting JSON to false
	I0612 21:30:15.866844   79038 mustload.go:65] Loading cluster: no-preload-087875
	I0612 21:30:15.867375   79038 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:30:15.867480   79038 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/config.json ...
	I0612 21:30:15.867719   79038 mustload.go:65] Loading cluster: no-preload-087875
	I0612 21:30:15.867875   79038 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:30:15.867918   79038 stop.go:39] StopHost: no-preload-087875
	I0612 21:30:15.868447   79038 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:30:15.868511   79038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:30:15.883454   79038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35205
	I0612 21:30:15.884053   79038 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:30:15.884582   79038 main.go:141] libmachine: Using API Version  1
	I0612 21:30:15.884605   79038 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:30:15.884951   79038 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:30:15.887538   79038 out.go:177] * Stopping node "no-preload-087875"  ...
	I0612 21:30:15.889100   79038 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0612 21:30:15.889152   79038 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:30:15.889437   79038 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0612 21:30:15.889475   79038 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:30:15.893227   79038 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:30:15.893653   79038 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:28:22 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:30:15.893680   79038 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:30:15.893844   79038 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:30:15.894045   79038 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:30:15.894203   79038 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:30:15.894356   79038 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:30:16.007228   79038 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0612 21:30:16.067986   79038 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0612 21:30:16.143838   79038 main.go:141] libmachine: Stopping "no-preload-087875"...
	I0612 21:30:16.143869   79038 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:30:16.145623   79038 main.go:141] libmachine: (no-preload-087875) Calling .Stop
	I0612 21:30:16.149842   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 0/120
	I0612 21:30:17.151357   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 1/120
	I0612 21:30:18.154263   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 2/120
	I0612 21:30:19.156111   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 3/120
	I0612 21:30:20.157618   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 4/120
	I0612 21:30:21.159582   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 5/120
	I0612 21:30:22.160990   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 6/120
	I0612 21:30:23.162391   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 7/120
	I0612 21:30:24.163959   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 8/120
	I0612 21:30:25.165803   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 9/120
	I0612 21:30:26.168203   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 10/120
	I0612 21:30:27.169580   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 11/120
	I0612 21:30:28.170950   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 12/120
	I0612 21:30:29.172120   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 13/120
	I0612 21:30:30.173629   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 14/120
	I0612 21:30:31.175392   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 15/120
	I0612 21:30:32.178093   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 16/120
	I0612 21:30:33.179534   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 17/120
	I0612 21:30:34.181237   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 18/120
	I0612 21:30:35.182531   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 19/120
	I0612 21:30:36.184771   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 20/120
	I0612 21:30:37.186179   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 21/120
	I0612 21:30:38.187467   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 22/120
	I0612 21:30:39.188734   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 23/120
	I0612 21:30:40.190333   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 24/120
	I0612 21:30:41.192473   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 25/120
	I0612 21:30:42.194246   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 26/120
	I0612 21:30:43.196030   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 27/120
	I0612 21:30:44.197768   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 28/120
	I0612 21:30:45.199123   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 29/120
	I0612 21:30:46.201699   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 30/120
	I0612 21:30:47.203068   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 31/120
	I0612 21:30:48.204420   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 32/120
	I0612 21:30:49.206207   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 33/120
	I0612 21:30:50.207579   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 34/120
	I0612 21:30:51.209497   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 35/120
	I0612 21:30:52.210953   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 36/120
	I0612 21:30:53.212335   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 37/120
	I0612 21:30:54.213538   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 38/120
	I0612 21:30:55.214877   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 39/120
	I0612 21:30:56.216081   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 40/120
	I0612 21:30:57.217533   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 41/120
	I0612 21:30:58.218809   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 42/120
	I0612 21:30:59.220027   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 43/120
	I0612 21:31:00.221703   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 44/120
	I0612 21:31:01.223624   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 45/120
	I0612 21:31:02.225640   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 46/120
	I0612 21:31:03.226813   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 47/120
	I0612 21:31:04.227999   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 48/120
	I0612 21:31:05.229478   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 49/120
	I0612 21:31:06.231474   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 50/120
	I0612 21:31:07.233689   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 51/120
	I0612 21:31:08.235092   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 52/120
	I0612 21:31:09.236364   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 53/120
	I0612 21:31:10.237589   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 54/120
	I0612 21:31:11.239493   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 55/120
	I0612 21:31:12.241426   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 56/120
	I0612 21:31:13.242741   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 57/120
	I0612 21:31:14.243849   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 58/120
	I0612 21:31:15.245070   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 59/120
	I0612 21:31:16.247312   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 60/120
	I0612 21:31:17.249388   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 61/120
	I0612 21:31:18.250661   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 62/120
	I0612 21:31:19.251968   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 63/120
	I0612 21:31:20.253105   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 64/120
	I0612 21:31:21.254962   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 65/120
	I0612 21:31:22.256215   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 66/120
	I0612 21:31:23.257378   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 67/120
	I0612 21:31:24.258574   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 68/120
	I0612 21:31:25.259988   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 69/120
	I0612 21:31:26.262085   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 70/120
	I0612 21:31:27.263210   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 71/120
	I0612 21:31:28.264648   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 72/120
	I0612 21:31:29.265783   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 73/120
	I0612 21:31:30.267029   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 74/120
	I0612 21:31:31.268829   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 75/120
	I0612 21:31:32.270062   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 76/120
	I0612 21:31:33.271331   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 77/120
	I0612 21:31:34.273608   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 78/120
	I0612 21:31:35.275013   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 79/120
	I0612 21:31:36.277213   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 80/120
	I0612 21:31:37.278292   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 81/120
	I0612 21:31:38.279784   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 82/120
	I0612 21:31:39.282102   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 83/120
	I0612 21:31:40.283012   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 84/120
	I0612 21:31:41.284610   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 85/120
	I0612 21:31:42.285558   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 86/120
	I0612 21:31:43.286500   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 87/120
	I0612 21:31:44.287472   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 88/120
	I0612 21:31:45.289238   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 89/120
	I0612 21:31:46.291068   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 90/120
	I0612 21:31:47.292342   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 91/120
	I0612 21:31:48.293402   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 92/120
	I0612 21:31:49.294218   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 93/120
	I0612 21:31:50.295562   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 94/120
	I0612 21:31:51.297537   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 95/120
	I0612 21:31:52.298973   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 96/120
	I0612 21:31:53.300366   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 97/120
	I0612 21:31:54.302072   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 98/120
	I0612 21:31:55.303778   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 99/120
	I0612 21:31:56.306105   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 100/120
	I0612 21:31:57.307833   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 101/120
	I0612 21:31:58.309137   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 102/120
	I0612 21:31:59.310727   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 103/120
	I0612 21:32:00.312104   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 104/120
	I0612 21:32:01.314179   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 105/120
	I0612 21:32:02.315993   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 106/120
	I0612 21:32:03.317822   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 107/120
	I0612 21:32:04.319199   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 108/120
	I0612 21:32:05.320573   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 109/120
	I0612 21:32:06.322730   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 110/120
	I0612 21:32:07.324418   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 111/120
	I0612 21:32:08.326707   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 112/120
	I0612 21:32:09.328456   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 113/120
	I0612 21:32:10.330237   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 114/120
	I0612 21:32:11.332250   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 115/120
	I0612 21:32:12.333994   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 116/120
	I0612 21:32:13.335502   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 117/120
	I0612 21:32:14.337755   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 118/120
	I0612 21:32:15.339391   79038 main.go:141] libmachine: (no-preload-087875) Waiting for machine to stop 119/120
	I0612 21:32:16.340677   79038 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0612 21:32:16.340754   79038 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0612 21:32:16.342885   79038 out.go:177] 
	W0612 21:32:16.344282   79038 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0612 21:32:16.344299   79038 out.go:239] * 
	* 
	W0612 21:32:16.347102   79038 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0612 21:32:16.348379   79038 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-087875 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-087875 -n no-preload-087875
E0612 21:32:16.530137   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-087875 -n no-preload-087875: exit status 3 (18.512874963s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:32:34.863525   79834 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.63:22: connect: no route to host
	E0612 21:32:34.863547   79834 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.63:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-087875" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.04s)
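
A minimal follow-up sketch for the GUEST_STOP_TIMEOUT above: the kvm2 driver polled the VM 120 times (about two minutes) without it leaving the "Running" state, and the later status probe failed with "no route to host" on 192.168.72.63:22. The commands below only restate what the failure box asks for (profile name and log path are copied from the output) and do not establish why the guest ignored the stop request:

	# collect the logs the failure box asks to attach to a GitHub issue
	out/minikube-linux-amd64 -p no-preload-087875 logs --file=logs.txt
	# the per-invocation trace referenced in the box
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log
	# re-check host state once the stop attempt has given up
	out/minikube-linux-amd64 status -p no-preload-087875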

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-376087 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-376087 --alsologtostderr -v=3: exit status 82 (2m0.510437702s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-376087"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 21:30:18.882837   79124 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:30:18.883227   79124 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:30:18.883266   79124 out.go:304] Setting ErrFile to fd 2...
	I0612 21:30:18.883283   79124 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:30:18.883586   79124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:30:18.883994   79124 out.go:298] Setting JSON to false
	I0612 21:30:18.884093   79124 mustload.go:65] Loading cluster: default-k8s-diff-port-376087
	I0612 21:30:18.884447   79124 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:30:18.884516   79124 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/config.json ...
	I0612 21:30:18.884678   79124 mustload.go:65] Loading cluster: default-k8s-diff-port-376087
	I0612 21:30:18.884776   79124 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:30:18.884805   79124 stop.go:39] StopHost: default-k8s-diff-port-376087
	I0612 21:30:18.885181   79124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:30:18.885230   79124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:30:18.902632   79124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0612 21:30:18.903146   79124 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:30:18.903703   79124 main.go:141] libmachine: Using API Version  1
	I0612 21:30:18.903722   79124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:30:18.904070   79124 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:30:18.906477   79124 out.go:177] * Stopping node "default-k8s-diff-port-376087"  ...
	I0612 21:30:18.908082   79124 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0612 21:30:18.908142   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:30:18.908377   79124 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0612 21:30:18.908400   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:30:18.911055   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:30:18.911432   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:29:24 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:30:18.911465   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:30:18.911662   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:30:18.911826   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:30:18.911981   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:30:18.912115   79124 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:30:19.007151   79124 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0612 21:30:19.068491   79124 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0612 21:30:19.132033   79124 main.go:141] libmachine: Stopping "default-k8s-diff-port-376087"...
	I0612 21:30:19.132063   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:30:19.133912   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Stop
	I0612 21:30:19.138055   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 0/120
	I0612 21:30:20.139669   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 1/120
	I0612 21:30:21.141815   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 2/120
	I0612 21:30:22.143328   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 3/120
	I0612 21:30:23.144699   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 4/120
	I0612 21:30:24.147183   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 5/120
	I0612 21:30:25.148377   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 6/120
	I0612 21:30:26.149877   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 7/120
	I0612 21:30:27.151070   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 8/120
	I0612 21:30:28.153172   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 9/120
	I0612 21:30:29.154720   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 10/120
	I0612 21:30:30.156358   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 11/120
	I0612 21:30:31.157640   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 12/120
	I0612 21:30:32.159219   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 13/120
	I0612 21:30:33.160927   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 14/120
	I0612 21:30:34.163098   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 15/120
	I0612 21:30:35.164907   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 16/120
	I0612 21:30:36.166631   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 17/120
	I0612 21:30:37.168487   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 18/120
	I0612 21:30:38.170130   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 19/120
	I0612 21:30:39.172410   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 20/120
	I0612 21:30:40.174254   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 21/120
	I0612 21:30:41.176040   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 22/120
	I0612 21:30:42.177565   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 23/120
	I0612 21:30:43.179373   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 24/120
	I0612 21:30:44.181604   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 25/120
	I0612 21:30:45.183074   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 26/120
	I0612 21:30:46.184755   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 27/120
	I0612 21:30:47.186539   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 28/120
	I0612 21:30:48.188291   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 29/120
	I0612 21:30:49.190096   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 30/120
	I0612 21:30:50.191863   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 31/120
	I0612 21:30:51.193634   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 32/120
	I0612 21:30:52.195471   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 33/120
	I0612 21:30:53.197262   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 34/120
	I0612 21:30:54.199625   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 35/120
	I0612 21:30:55.201245   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 36/120
	I0612 21:30:56.202663   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 37/120
	I0612 21:30:57.205033   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 38/120
	I0612 21:30:58.206531   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 39/120
	I0612 21:30:59.208763   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 40/120
	I0612 21:31:00.210479   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 41/120
	I0612 21:31:01.211964   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 42/120
	I0612 21:31:02.213335   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 43/120
	I0612 21:31:03.214712   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 44/120
	I0612 21:31:04.216807   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 45/120
	I0612 21:31:05.218383   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 46/120
	I0612 21:31:06.219969   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 47/120
	I0612 21:31:07.221817   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 48/120
	I0612 21:31:08.223339   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 49/120
	I0612 21:31:09.225322   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 50/120
	I0612 21:31:10.226890   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 51/120
	I0612 21:31:11.228522   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 52/120
	I0612 21:31:12.229990   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 53/120
	I0612 21:31:13.231570   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 54/120
	I0612 21:31:14.233797   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 55/120
	I0612 21:31:15.235258   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 56/120
	I0612 21:31:16.236796   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 57/120
	I0612 21:31:17.238217   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 58/120
	I0612 21:31:18.239828   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 59/120
	I0612 21:31:19.242421   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 60/120
	I0612 21:31:20.243917   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 61/120
	I0612 21:31:21.245512   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 62/120
	I0612 21:31:22.246911   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 63/120
	I0612 21:31:23.248352   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 64/120
	I0612 21:31:24.250493   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 65/120
	I0612 21:31:25.252117   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 66/120
	I0612 21:31:26.253604   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 67/120
	I0612 21:31:27.255124   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 68/120
	I0612 21:31:28.256686   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 69/120
	I0612 21:31:29.258359   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 70/120
	I0612 21:31:30.259887   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 71/120
	I0612 21:31:31.261381   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 72/120
	I0612 21:31:32.262907   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 73/120
	I0612 21:31:33.264487   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 74/120
	I0612 21:31:34.266445   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 75/120
	I0612 21:31:35.268335   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 76/120
	I0612 21:31:36.269814   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 77/120
	I0612 21:31:37.271388   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 78/120
	I0612 21:31:38.273037   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 79/120
	I0612 21:31:39.275369   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 80/120
	I0612 21:31:40.277690   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 81/120
	I0612 21:31:41.279204   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 82/120
	I0612 21:31:42.280914   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 83/120
	I0612 21:31:43.282459   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 84/120
	I0612 21:31:44.284576   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 85/120
	I0612 21:31:45.285918   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 86/120
	I0612 21:31:46.287395   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 87/120
	I0612 21:31:47.289826   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 88/120
	I0612 21:31:48.291152   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 89/120
	I0612 21:31:49.293279   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 90/120
	I0612 21:31:50.294977   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 91/120
	I0612 21:31:51.297427   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 92/120
	I0612 21:31:52.298876   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 93/120
	I0612 21:31:53.300492   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 94/120
	I0612 21:31:54.302297   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 95/120
	I0612 21:31:55.304773   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 96/120
	I0612 21:31:56.306838   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 97/120
	I0612 21:31:57.308645   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 98/120
	I0612 21:31:58.309881   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 99/120
	I0612 21:31:59.311758   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 100/120
	I0612 21:32:00.313402   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 101/120
	I0612 21:32:01.314650   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 102/120
	I0612 21:32:02.316957   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 103/120
	I0612 21:32:03.318714   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 104/120
	I0612 21:32:04.320441   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 105/120
	I0612 21:32:05.321928   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 106/120
	I0612 21:32:06.323267   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 107/120
	I0612 21:32:07.324908   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 108/120
	I0612 21:32:08.326590   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 109/120
	I0612 21:32:09.329200   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 110/120
	I0612 21:32:10.330898   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 111/120
	I0612 21:32:11.332390   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 112/120
	I0612 21:32:12.333845   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 113/120
	I0612 21:32:13.335327   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 114/120
	I0612 21:32:14.337477   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 115/120
	I0612 21:32:15.339121   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 116/120
	I0612 21:32:16.340811   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 117/120
	I0612 21:32:17.342445   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 118/120
	I0612 21:32:18.344027   79124 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for machine to stop 119/120
	I0612 21:32:19.344630   79124 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0612 21:32:19.344695   79124 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0612 21:32:19.346449   79124 out.go:177] 
	W0612 21:32:19.347810   79124 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0612 21:32:19.347824   79124 out.go:239] * 
	* 
	W0612 21:32:19.350419   79124 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0612 21:32:19.351767   79124 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-376087 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-376087 -n default-k8s-diff-port-376087
E0612 21:32:26.770578   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
E0612 21:32:29.498086   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
E0612 21:32:29.503371   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
E0612 21:32:29.513649   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
E0612 21:32:29.533929   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
E0612 21:32:29.574194   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
E0612 21:32:29.654576   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
E0612 21:32:29.814819   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
E0612 21:32:30.135419   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
E0612 21:32:30.776362   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
E0612 21:32:32.057184   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-376087 -n default-k8s-diff-port-376087: exit status 3 (18.58097586s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:32:37.935474   79880 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.80:22: connect: no route to host
	E0612 21:32:37.935493   79880 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.80:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-376087" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-591460 --alsologtostderr -v=3
E0612 21:30:34.342887   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:30:34.348161   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:30:34.358484   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:30:34.378746   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:30:34.419116   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:30:34.499561   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:30:34.660581   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:30:34.981340   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:30:35.621602   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:30:36.902147   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:30:39.463343   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:30:44.583937   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:30:45.096944   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
E0612 21:30:54.824129   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:31:15.304336   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:31:19.753023   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 21:31:26.057378   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
E0612 21:31:26.516758   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:31:26.522008   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:31:26.532265   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:31:26.552644   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:31:26.592990   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:31:26.673378   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:31:26.834079   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:31:27.154674   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:31:27.795195   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:31:29.076402   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:31:31.637462   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:31:36.758623   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:31:46.999279   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:31:48.613150   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 21:31:56.264585   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:32:06.289185   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
E0612 21:32:06.294484   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
E0612 21:32:06.304713   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
E0612 21:32:06.325540   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
E0612 21:32:06.365829   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
E0612 21:32:06.446243   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
E0612 21:32:06.606874   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
E0612 21:32:06.927476   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
E0612 21:32:07.480340   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:32:07.568645   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
E0612 21:32:08.848871   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
E0612 21:32:11.409744   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-591460 --alsologtostderr -v=3: exit status 82 (2m0.502539693s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-591460"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 21:30:33.289692   79266 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:30:33.289967   79266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:30:33.289978   79266 out.go:304] Setting ErrFile to fd 2...
	I0612 21:30:33.289984   79266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:30:33.290206   79266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:30:33.290529   79266 out.go:298] Setting JSON to false
	I0612 21:30:33.290646   79266 mustload.go:65] Loading cluster: embed-certs-591460
	I0612 21:30:33.290978   79266 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:30:33.291079   79266 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/config.json ...
	I0612 21:30:33.291296   79266 mustload.go:65] Loading cluster: embed-certs-591460
	I0612 21:30:33.291460   79266 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:30:33.291511   79266 stop.go:39] StopHost: embed-certs-591460
	I0612 21:30:33.291991   79266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:30:33.292054   79266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:30:33.307152   79266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36921
	I0612 21:30:33.307666   79266 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:30:33.308264   79266 main.go:141] libmachine: Using API Version  1
	I0612 21:30:33.308288   79266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:30:33.308667   79266 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:30:33.311362   79266 out.go:177] * Stopping node "embed-certs-591460"  ...
	I0612 21:30:33.313282   79266 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0612 21:30:33.313313   79266 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:30:33.313579   79266 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0612 21:30:33.313614   79266 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:30:33.316916   79266 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:30:33.317358   79266 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:28:57 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:30:33.317401   79266 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:30:33.317506   79266 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:30:33.317727   79266 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:30:33.317893   79266 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:30:33.318066   79266 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:30:33.417783   79266 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0612 21:30:33.471432   79266 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0612 21:30:33.534157   79266 main.go:141] libmachine: Stopping "embed-certs-591460"...
	I0612 21:30:33.534188   79266 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:30:33.535800   79266 main.go:141] libmachine: (embed-certs-591460) Calling .Stop
	I0612 21:30:33.539606   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 0/120
	I0612 21:30:34.541198   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 1/120
	I0612 21:30:35.543156   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 2/120
	I0612 21:30:36.545204   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 3/120
	I0612 21:30:37.546892   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 4/120
	I0612 21:30:38.549567   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 5/120
	I0612 21:30:39.551363   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 6/120
	I0612 21:30:40.553451   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 7/120
	I0612 21:30:41.554887   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 8/120
	I0612 21:30:42.556460   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 9/120
	I0612 21:30:43.558516   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 10/120
	I0612 21:30:44.560571   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 11/120
	I0612 21:30:45.561959   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 12/120
	I0612 21:30:46.563695   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 13/120
	I0612 21:30:47.565632   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 14/120
	I0612 21:30:48.567809   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 15/120
	I0612 21:30:49.569363   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 16/120
	I0612 21:30:50.571140   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 17/120
	I0612 21:30:51.572705   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 18/120
	I0612 21:30:52.574472   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 19/120
	I0612 21:30:53.576894   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 20/120
	I0612 21:30:54.578591   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 21/120
	I0612 21:30:55.580077   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 22/120
	I0612 21:30:56.581684   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 23/120
	I0612 21:30:57.582921   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 24/120
	I0612 21:30:58.585019   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 25/120
	I0612 21:30:59.586663   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 26/120
	I0612 21:31:00.588131   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 27/120
	I0612 21:31:01.590056   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 28/120
	I0612 21:31:02.591632   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 29/120
	I0612 21:31:03.593973   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 30/120
	I0612 21:31:04.595555   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 31/120
	I0612 21:31:05.597046   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 32/120
	I0612 21:31:06.598695   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 33/120
	I0612 21:31:07.600683   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 34/120
	I0612 21:31:08.602749   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 35/120
	I0612 21:31:09.604432   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 36/120
	I0612 21:31:10.606125   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 37/120
	I0612 21:31:11.607461   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 38/120
	I0612 21:31:12.608998   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 39/120
	I0612 21:31:13.611285   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 40/120
	I0612 21:31:14.612695   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 41/120
	I0612 21:31:15.614112   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 42/120
	I0612 21:31:16.615540   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 43/120
	I0612 21:31:17.617074   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 44/120
	I0612 21:31:18.619198   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 45/120
	I0612 21:31:19.620627   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 46/120
	I0612 21:31:20.621960   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 47/120
	I0612 21:31:21.623588   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 48/120
	I0612 21:31:22.625137   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 49/120
	I0612 21:31:23.627697   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 50/120
	I0612 21:31:24.629231   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 51/120
	I0612 21:31:25.630697   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 52/120
	I0612 21:31:26.632107   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 53/120
	I0612 21:31:27.633526   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 54/120
	I0612 21:31:28.635673   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 55/120
	I0612 21:31:29.637085   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 56/120
	I0612 21:31:30.639206   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 57/120
	I0612 21:31:31.640487   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 58/120
	I0612 21:31:32.642134   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 59/120
	I0612 21:31:33.644213   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 60/120
	I0612 21:31:34.645859   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 61/120
	I0612 21:31:35.647582   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 62/120
	I0612 21:31:36.648952   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 63/120
	I0612 21:31:37.650324   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 64/120
	I0612 21:31:38.652449   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 65/120
	I0612 21:31:39.653913   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 66/120
	I0612 21:31:40.655445   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 67/120
	I0612 21:31:41.657002   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 68/120
	I0612 21:31:42.658500   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 69/120
	I0612 21:31:43.660784   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 70/120
	I0612 21:31:44.662304   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 71/120
	I0612 21:31:45.663717   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 72/120
	I0612 21:31:46.665541   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 73/120
	I0612 21:31:47.666842   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 74/120
	I0612 21:31:48.668811   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 75/120
	I0612 21:31:49.670365   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 76/120
	I0612 21:31:50.671773   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 77/120
	I0612 21:31:51.673306   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 78/120
	I0612 21:31:52.674975   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 79/120
	I0612 21:31:53.677028   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 80/120
	I0612 21:31:54.678356   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 81/120
	I0612 21:31:55.679812   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 82/120
	I0612 21:31:56.681158   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 83/120
	I0612 21:31:57.682697   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 84/120
	I0612 21:31:58.684745   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 85/120
	I0612 21:31:59.686259   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 86/120
	I0612 21:32:00.687874   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 87/120
	I0612 21:32:01.689359   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 88/120
	I0612 21:32:02.691072   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 89/120
	I0612 21:32:03.693368   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 90/120
	I0612 21:32:04.694912   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 91/120
	I0612 21:32:05.696319   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 92/120
	I0612 21:32:06.697773   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 93/120
	I0612 21:32:07.699294   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 94/120
	I0612 21:32:08.701461   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 95/120
	I0612 21:32:09.702918   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 96/120
	I0612 21:32:10.704402   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 97/120
	I0612 21:32:11.705620   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 98/120
	I0612 21:32:12.707458   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 99/120
	I0612 21:32:13.709601   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 100/120
	I0612 21:32:14.710924   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 101/120
	I0612 21:32:15.712443   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 102/120
	I0612 21:32:16.713716   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 103/120
	I0612 21:32:17.715137   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 104/120
	I0612 21:32:18.717489   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 105/120
	I0612 21:32:19.719008   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 106/120
	I0612 21:32:20.720497   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 107/120
	I0612 21:32:21.722019   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 108/120
	I0612 21:32:22.723562   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 109/120
	I0612 21:32:23.725890   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 110/120
	I0612 21:32:24.727413   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 111/120
	I0612 21:32:25.728842   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 112/120
	I0612 21:32:26.730252   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 113/120
	I0612 21:32:27.731765   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 114/120
	I0612 21:32:28.734008   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 115/120
	I0612 21:32:29.735527   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 116/120
	I0612 21:32:30.737012   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 117/120
	I0612 21:32:31.738528   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 118/120
	I0612 21:32:32.740201   79266 main.go:141] libmachine: (embed-certs-591460) Waiting for machine to stop 119/120
	I0612 21:32:33.741650   79266 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0612 21:32:33.741698   79266 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0612 21:32:33.743773   79266 out.go:177] 
	W0612 21:32:33.745205   79266 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0612 21:32:33.745225   79266 out.go:239] * 
	W0612 21:32:33.747941   79266 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0612 21:32:33.749214   79266 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-591460 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-591460 -n embed-certs-591460
E0612 21:32:34.617744   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-591460 -n embed-certs-591460: exit status 3 (18.520867079s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:32:52.271569   79960 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	E0612 21:32:52.271595   79960 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-591460" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-983302 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-983302 create -f testdata/busybox.yaml: exit status 1 (46.464223ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-983302" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-983302 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302: exit status 6 (214.602341ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:32:14.547884   79745 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-983302" does not appear in /home/jenkins/minikube-integration/17779-14199/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-983302" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302: exit status 6 (224.691728ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:32:14.772633   79775 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-983302" does not appear in /home/jenkins/minikube-integration/17779-14199/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-983302" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-983302 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-983302 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m36.196802077s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-983302 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-983302 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-983302 describe deploy/metrics-server -n kube-system: exit status 1 (42.361548ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-983302" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-983302 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302: exit status 6 (220.969582ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:33:51.233941   80648 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-983302" does not appear in /home/jenkins/minikube-integration/17779-14199/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-983302" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-087875 -n no-preload-087875
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-087875 -n no-preload-087875: exit status 3 (3.172266038s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:32:38.035428   79989 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.63:22: connect: no route to host
	E0612 21:32:38.035448   79989 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.63:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-087875 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0612 21:32:39.738577   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-087875 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.149317344s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.63:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-087875 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-087875 -n no-preload-087875
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-087875 -n no-preload-087875: exit status 3 (3.06209249s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:32:47.247526   80118 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.63:22: connect: no route to host
	E0612 21:32:47.247543   80118 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.63:22: connect: no route to host

                                                
                                                
** /stderr **
E0612 21:32:47.250741   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-087875" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-376087 -n default-k8s-diff-port-376087
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-376087 -n default-k8s-diff-port-376087: exit status 3 (3.168626123s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:32:41.103576   80035 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.80:22: connect: no route to host
	E0612 21:32:41.103613   80035 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.80:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-376087 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-376087 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153639423s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.80:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-376087 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-376087 -n default-k8s-diff-port-376087
E0612 21:32:47.978312   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
E0612 21:32:48.441079   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:32:49.979245   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-376087 -n default-k8s-diff-port-376087: exit status 3 (3.061963971s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:32:50.319623   80163 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.80:22: connect: no route to host
	E0612 21:32:50.319642   80163 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.80:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-376087" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-591460 -n embed-certs-591460
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-591460 -n embed-certs-591460: exit status 3 (3.167726515s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0612 21:32:55.439574   80277 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	E0612 21:32:55.439593   80277 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-591460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-591460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152612474s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-591460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-591460 -n embed-certs-591460
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-591460 -n embed-certs-591460: exit status 3 (3.062990719s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0612 21:33:04.655541   80358 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	E0612 21:33:04.655562   80358 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-591460" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
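For reference, the failing step above reduces to two CLI calls: a post-stop status check that must print "Stopped", followed by `addons enable dashboard`. The following is a minimal standalone sketch, not taken from start_stop_delete_test.go, that only mirrors the status assertion; the binary path out/minikube-linux-amd64 and the profile name embed-certs-591460 are assumptions copied from the log above.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Assumed values, copied from the log above for illustration only.
	binary := "out/minikube-linux-amd64"
	profile := "embed-certs-591460"

	// Same invocation as the test: print only the host state for the profile.
	out, err := exec.Command(binary, "status", "--format={{.Host}}",
		"-p", profile, "-n", profile).CombinedOutput()
	got := strings.TrimSpace(string(out))
	if err != nil {
		// In the log the status command exited non-zero (exit status 3), which
		// the test notes as "may be ok"; the real assertion is on the printed
		// host state, not the exit code.
		fmt.Fprintf(os.Stderr, "status exited with: %v\n", err)
	}

	// The test expects "Stopped" after `minikube stop`; the log instead shows
	// "Error" because SSH to 192.168.39.147:22 had no route to host.
	if got != "Stopped" {
		fmt.Fprintf(os.Stderr, "expected post-stop host status \"Stopped\", got %q\n", got)
		os.Exit(1)
	}
	fmt.Println("host is Stopped; safe to run: addons enable dashboard -p", profile)
}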

TestStartStop/group/old-k8s-version/serial/SecondStart (765.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-983302 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0612 21:33:55.257097   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:33:55.385454   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:34:10.361512   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:34:36.218242   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:34:36.345795   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:34:50.131384   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
E0612 21:34:56.704711   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 21:35:04.134613   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
E0612 21:35:13.341603   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
E0612 21:35:31.820778   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
E0612 21:35:34.342853   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:35:58.138625   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:35:58.267001   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:36:02.027595   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:36:26.517692   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:36:48.613443   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 21:36:54.201935   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:37:06.290458   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
E0612 21:37:29.498258   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
E0612 21:37:33.973031   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
E0612 21:37:57.182094   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
E0612 21:38:11.662158   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 21:38:14.295519   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:38:14.422083   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:38:41.979058   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:38:42.107927   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:39:56.704132   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 21:40:04.134305   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
E0612 21:40:34.343073   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:41:26.517063   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:41:48.613270   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 21:42:06.289522   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-983302 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m42.432896083s)

-- stdout --
	* [old-k8s-version-983302] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-983302" primary control-plane node in "old-k8s-version-983302" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-983302" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0612 21:33:52.855557   80762 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:33:52.855829   80762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:33:52.855839   80762 out.go:304] Setting ErrFile to fd 2...
	I0612 21:33:52.855845   80762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:33:52.856037   80762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:33:52.856582   80762 out.go:298] Setting JSON to false
	I0612 21:33:52.857472   80762 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8178,"bootTime":1718219855,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:33:52.857527   80762 start.go:139] virtualization: kvm guest
	I0612 21:33:52.859369   80762 out.go:177] * [old-k8s-version-983302] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:33:52.860886   80762 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:33:52.860907   80762 notify.go:220] Checking for updates...
	I0612 21:33:52.862185   80762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:33:52.863642   80762 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:33:52.865031   80762 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:33:52.866306   80762 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:33:52.867535   80762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:33:52.869148   80762 config.go:182] Loaded profile config "old-k8s-version-983302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:33:52.869530   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:33:52.869597   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:33:52.884278   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41163
	I0612 21:33:52.884743   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:33:52.885211   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:33:52.885234   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:33:52.885575   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:33:52.885768   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:33:52.887577   80762 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0612 21:33:52.888972   80762 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:33:52.889265   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:33:52.889296   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:33:52.903649   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44493
	I0612 21:33:52.904087   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:33:52.904500   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:33:52.904518   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:33:52.904831   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:33:52.904988   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:33:52.939030   80762 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 21:33:52.940484   80762 start.go:297] selected driver: kvm2
	I0612 21:33:52.940497   80762 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:33:52.940622   80762 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:33:52.941314   80762 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:33:52.941389   80762 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:33:52.956273   80762 install.go:137] /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:33:52.956646   80762 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:33:52.956674   80762 cni.go:84] Creating CNI manager for ""
	I0612 21:33:52.956682   80762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:33:52.956715   80762 start.go:340] cluster config:
	{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:33:52.956828   80762 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:33:52.958634   80762 out.go:177] * Starting "old-k8s-version-983302" primary control-plane node in "old-k8s-version-983302" cluster
	I0612 21:33:52.959924   80762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:33:52.959963   80762 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0612 21:33:52.959970   80762 cache.go:56] Caching tarball of preloaded images
	I0612 21:33:52.960065   80762 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:33:52.960079   80762 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0612 21:33:52.960190   80762 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:33:52.960397   80762 start.go:360] acquireMachinesLock for old-k8s-version-983302: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:38:04.923975   80762 start.go:364] duration metric: took 4m11.963543792s to acquireMachinesLock for "old-k8s-version-983302"
	I0612 21:38:04.924056   80762 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:38:04.924068   80762 fix.go:54] fixHost starting: 
	I0612 21:38:04.924507   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:04.924543   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:04.942022   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0612 21:38:04.942428   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:04.942891   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:38:04.942917   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:04.943345   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:04.943553   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:04.943726   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetState
	I0612 21:38:04.945403   80762 fix.go:112] recreateIfNeeded on old-k8s-version-983302: state=Stopped err=<nil>
	I0612 21:38:04.945427   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	W0612 21:38:04.945581   80762 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:38:04.947672   80762 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-983302" ...
	I0612 21:38:04.949078   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .Start
	I0612 21:38:04.949226   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring networks are active...
	I0612 21:38:04.949936   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network default is active
	I0612 21:38:04.950371   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network mk-old-k8s-version-983302 is active
	I0612 21:38:04.950813   80762 main.go:141] libmachine: (old-k8s-version-983302) Getting domain xml...
	I0612 21:38:04.951549   80762 main.go:141] libmachine: (old-k8s-version-983302) Creating domain...
	I0612 21:38:06.296150   80762 main.go:141] libmachine: (old-k8s-version-983302) Waiting to get IP...
	I0612 21:38:06.296978   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.297465   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.297570   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.297453   81824 retry.go:31] will retry after 256.609938ms: waiting for machine to come up
	I0612 21:38:06.556307   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.556935   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.556967   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.556884   81824 retry.go:31] will retry after 285.754887ms: waiting for machine to come up
	I0612 21:38:06.844674   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.845227   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.845255   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.845171   81824 retry.go:31] will retry after 326.266367ms: waiting for machine to come up
	I0612 21:38:07.172788   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:07.173414   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:07.173447   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:07.173353   81824 retry.go:31] will retry after 393.443927ms: waiting for machine to come up
	I0612 21:38:07.568084   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:07.568645   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:07.568673   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:07.568609   81824 retry.go:31] will retry after 726.66775ms: waiting for machine to come up
	I0612 21:38:08.296811   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:08.297295   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:08.297319   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:08.297250   81824 retry.go:31] will retry after 658.540746ms: waiting for machine to come up
	I0612 21:38:08.957164   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:08.957611   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:08.957635   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:08.957576   81824 retry.go:31] will retry after 921.725713ms: waiting for machine to come up
	I0612 21:38:09.880881   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:09.881672   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:09.881703   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:09.881604   81824 retry.go:31] will retry after 1.355846361s: waiting for machine to come up
	I0612 21:38:11.238616   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:11.239058   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:11.239094   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:11.238996   81824 retry.go:31] will retry after 1.3469357s: waiting for machine to come up
	I0612 21:38:12.587245   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:12.587747   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:12.587785   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:12.587683   81824 retry.go:31] will retry after 1.616666063s: waiting for machine to come up
	I0612 21:38:14.206281   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:14.206781   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:14.206810   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:14.206716   81824 retry.go:31] will retry after 2.057638604s: waiting for machine to come up
	I0612 21:38:16.266372   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:16.266920   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:16.266955   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:16.266858   81824 retry.go:31] will retry after 2.387834661s: waiting for machine to come up
	I0612 21:38:18.656575   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:18.657074   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:18.657111   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:18.657022   81824 retry.go:31] will retry after 3.518256927s: waiting for machine to come up
	I0612 21:38:22.176416   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.176901   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has current primary IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.176930   80762 main.go:141] libmachine: (old-k8s-version-983302) Found IP for machine: 192.168.50.81
	I0612 21:38:22.176965   80762 main.go:141] libmachine: (old-k8s-version-983302) Reserving static IP address...
	I0612 21:38:22.177385   80762 main.go:141] libmachine: (old-k8s-version-983302) Reserved static IP address: 192.168.50.81
	I0612 21:38:22.177422   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "old-k8s-version-983302", mac: "52:54:00:7b:c8:d2", ip: "192.168.50.81"} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.177435   80762 main.go:141] libmachine: (old-k8s-version-983302) Waiting for SSH to be available...
	I0612 21:38:22.177459   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | skip adding static IP to network mk-old-k8s-version-983302 - found existing host DHCP lease matching {name: "old-k8s-version-983302", mac: "52:54:00:7b:c8:d2", ip: "192.168.50.81"}
	I0612 21:38:22.177471   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Getting to WaitForSSH function...
	I0612 21:38:22.179728   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.180130   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.180158   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.180273   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH client type: external
	I0612 21:38:22.180334   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa (-rw-------)
	I0612 21:38:22.180368   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:22.180387   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | About to run SSH command:
	I0612 21:38:22.180399   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | exit 0
	I0612 21:38:22.308621   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:22.308979   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetConfigRaw
	I0612 21:38:22.309620   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:22.312747   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.313124   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.313155   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.313421   80762 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:38:22.313635   80762 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:22.313658   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:22.313884   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.316476   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.316961   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.317014   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.317221   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.317408   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.317600   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.317775   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.317955   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.318195   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.318207   80762 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:22.431693   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:22.431728   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.431979   80762 buildroot.go:166] provisioning hostname "old-k8s-version-983302"
	I0612 21:38:22.432006   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.432191   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.434830   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.435267   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.435300   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.435431   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.435598   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.435718   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.435826   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.436056   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.436237   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.436252   80762 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-983302 && echo "old-k8s-version-983302" | sudo tee /etc/hostname
	I0612 21:38:22.563119   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-983302
	
	I0612 21:38:22.563184   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.565915   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.566281   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.566315   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.566513   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.566704   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.566885   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.567021   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.567243   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.567463   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.567490   80762 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-983302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-983302/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-983302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:22.690443   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:22.690474   80762 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:22.690494   80762 buildroot.go:174] setting up certificates
	I0612 21:38:22.690504   80762 provision.go:84] configureAuth start
	I0612 21:38:22.690514   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.690774   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:22.693186   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.693528   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.693576   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.693689   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.695948   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.696285   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.696318   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.696432   80762 provision.go:143] copyHostCerts
	I0612 21:38:22.696501   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:22.696521   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:22.696583   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:22.696662   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:22.696671   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:22.696693   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:22.696774   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:22.696784   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:22.696803   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:22.696847   80762 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-983302 san=[127.0.0.1 192.168.50.81 localhost minikube old-k8s-version-983302]
	I0612 21:38:22.863618   80762 provision.go:177] copyRemoteCerts
	I0612 21:38:22.863672   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:22.863698   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.866979   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.867371   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.867403   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.867548   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.867734   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.867904   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.868126   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:22.958350   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 21:38:22.984409   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:23.009623   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0612 21:38:23.038026   80762 provision.go:87] duration metric: took 347.510898ms to configureAuth
	I0612 21:38:23.038063   80762 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:23.038309   80762 config.go:182] Loaded profile config "old-k8s-version-983302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:38:23.038390   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.041196   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.041634   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.041660   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.041842   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.042044   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.042222   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.042410   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.042580   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:23.042780   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:23.042799   80762 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:23.324862   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:23.324893   80762 machine.go:97] duration metric: took 1.01124225s to provisionDockerMachine
	I0612 21:38:23.324904   80762 start.go:293] postStartSetup for "old-k8s-version-983302" (driver="kvm2")
	I0612 21:38:23.324913   80762 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:23.324928   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.325240   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:23.325274   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.328007   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.328343   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.328372   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.328578   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.328770   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.328939   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.329068   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.416040   80762 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:23.420586   80762 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:23.420607   80762 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:23.420674   80762 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:23.420739   80762 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:23.420823   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:23.432266   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:23.460619   80762 start.go:296] duration metric: took 135.703593ms for postStartSetup
	I0612 21:38:23.460661   80762 fix.go:56] duration metric: took 18.536593686s for fixHost
	I0612 21:38:23.460684   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.463415   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.463745   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.463780   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.463909   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.464110   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.464248   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.464378   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.464533   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:23.464742   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:23.464754   80762 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 21:38:23.576211   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228303.539451044
	
	I0612 21:38:23.576231   80762 fix.go:216] guest clock: 1718228303.539451044
	I0612 21:38:23.576239   80762 fix.go:229] Guest: 2024-06-12 21:38:23.539451044 +0000 UTC Remote: 2024-06-12 21:38:23.460665921 +0000 UTC m=+270.637213069 (delta=78.785123ms)
	I0612 21:38:23.576285   80762 fix.go:200] guest clock delta is within tolerance: 78.785123ms
	I0612 21:38:23.576291   80762 start.go:83] releasing machines lock for "old-k8s-version-983302", held for 18.65227368s
	I0612 21:38:23.576316   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.576617   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:23.579493   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.579881   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.579913   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.580120   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580693   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580865   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580952   80762 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:23.581005   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.581111   80762 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:23.581141   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.584053   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584262   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584443   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.584479   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584587   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.584690   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.584728   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584757   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.584855   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.584918   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.584980   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.585067   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.585115   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.585227   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.666055   80762 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:23.688409   80762 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:23.848030   80762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:23.855302   80762 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:23.855383   80762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:23.874362   80762 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:23.874389   80762 start.go:494] detecting cgroup driver to use...
	I0612 21:38:23.874461   80762 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:23.893239   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:23.909774   80762 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:23.909844   80762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:23.926084   80762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:23.943341   80762 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:24.072731   80762 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:24.244551   80762 docker.go:233] disabling docker service ...
	I0612 21:38:24.244624   80762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:24.261862   80762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:24.277051   80762 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:24.426146   80762 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:24.560634   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:24.575339   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:24.595965   80762 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0612 21:38:24.596043   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.607814   80762 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:24.607892   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.619001   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.630982   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.644326   80762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:24.658640   80762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:24.673944   80762 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:24.673994   80762 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:24.693853   80762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:24.709251   80762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:24.856222   80762 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:25.023760   80762 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:25.023842   80762 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:25.029449   80762 start.go:562] Will wait 60s for crictl version
	I0612 21:38:25.029522   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:25.033750   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:25.080911   80762 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:25.081018   80762 ssh_runner.go:195] Run: crio --version
	I0612 21:38:25.111727   80762 ssh_runner.go:195] Run: crio --version
	I0612 21:38:25.145999   80762 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0612 21:38:25.147420   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:25.151029   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:25.151402   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:25.151432   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:25.151726   80762 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:25.156561   80762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:25.171243   80762 kubeadm.go:877] updating cluster {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:25.171386   80762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:38:25.171429   80762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:25.225872   80762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:38:25.225936   80762 ssh_runner.go:195] Run: which lz4
	I0612 21:38:25.230447   80762 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 21:38:25.235452   80762 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:38:25.235485   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0612 21:38:27.033962   80762 crio.go:462] duration metric: took 1.803565745s to copy over tarball
	I0612 21:38:27.034045   80762 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:38:30.212028   80762 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.177947965s)
	I0612 21:38:30.212073   80762 crio.go:469] duration metric: took 3.178080815s to extract the tarball
	I0612 21:38:30.212085   80762 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:38:30.256957   80762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:30.297891   80762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:38:30.297917   80762 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0612 21:38:30.298025   80762 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.298045   80762 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.298055   80762 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.298021   80762 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0612 21:38:30.298106   80762 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.298062   80762 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.298004   80762 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:30.298079   80762 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.299755   80762 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0612 21:38:30.299842   80762 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.299848   80762 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.299843   80762 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:30.299866   80762 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.299876   80762 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.299905   80762 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.299755   80762 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.466739   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0612 21:38:30.516078   80762 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0612 21:38:30.516127   80762 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0612 21:38:30.516174   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.520362   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0612 21:38:30.545437   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.563320   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0612 21:38:30.599110   80762 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0612 21:38:30.599155   80762 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.599217   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.603578   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.639450   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0612 21:38:30.649462   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.650602   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.652555   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.656970   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.672136   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.766185   80762 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0612 21:38:30.766233   80762 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.766279   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.778901   80762 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0612 21:38:30.778946   80762 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.778952   80762 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0612 21:38:30.778983   80762 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.778994   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.779041   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.793610   80762 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0612 21:38:30.793650   80762 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.793698   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.807451   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.807482   80762 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0612 21:38:30.807518   80762 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.807458   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.807518   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.807557   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.807559   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.916470   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0612 21:38:30.916564   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0612 21:38:30.916576   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0612 21:38:30.916603   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0612 21:38:30.916646   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.953152   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0612 21:38:31.194046   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:31.341827   80762 cache_images.go:92] duration metric: took 1.043891497s to LoadCachedImages
	W0612 21:38:31.341922   80762 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0612 21:38:31.341937   80762 kubeadm.go:928] updating node { 192.168.50.81 8443 v1.20.0 crio true true} ...
	I0612 21:38:31.342064   80762 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-983302 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:38:31.342154   80762 ssh_runner.go:195] Run: crio config
	I0612 21:38:31.395673   80762 cni.go:84] Creating CNI manager for ""
	I0612 21:38:31.395706   80762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:31.395722   80762 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:38:31.395744   80762 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-983302 NodeName:old-k8s-version-983302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0612 21:38:31.395918   80762 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-983302"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:38:31.395995   80762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0612 21:38:31.410706   80762 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:38:31.410785   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:38:31.425161   80762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0612 21:38:31.445883   80762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:38:31.463605   80762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0612 21:38:31.482797   80762 ssh_runner.go:195] Run: grep 192.168.50.81	control-plane.minikube.internal$ /etc/hosts
	I0612 21:38:31.486974   80762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:31.499681   80762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:31.645490   80762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:31.668769   80762 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302 for IP: 192.168.50.81
	I0612 21:38:31.668797   80762 certs.go:194] generating shared ca certs ...
	I0612 21:38:31.668820   80762 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:31.668987   80762 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:38:31.669061   80762 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:38:31.669088   80762 certs.go:256] generating profile certs ...
	I0612 21:38:31.669212   80762 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/client.key
	I0612 21:38:31.669309   80762 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key.1098c83c
	I0612 21:38:31.669373   80762 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key
	I0612 21:38:31.669548   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:38:31.669598   80762 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:38:31.669613   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:38:31.669662   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:38:31.669723   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:38:31.669759   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:38:31.669830   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:31.670835   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:38:31.717330   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:38:31.754900   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:38:31.798099   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:38:31.839647   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0612 21:38:31.883454   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:38:31.920765   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:38:31.953069   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0612 21:38:31.978134   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:38:32.002475   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:38:32.027784   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:38:32.053563   80762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:38:32.074493   80762 ssh_runner.go:195] Run: openssl version
	I0612 21:38:32.080620   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:38:32.093531   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.098615   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.098688   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.104777   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:38:32.116551   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:38:32.130188   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.135197   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.135279   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.142777   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:38:32.156051   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:38:32.169866   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.175249   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.175340   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.181561   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:38:32.193430   80762 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:38:32.198235   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:38:32.204654   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:38:32.210771   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:38:32.216966   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:38:32.223203   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:38:32.230990   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 21:38:32.237290   80762 kubeadm.go:391] StartCluster: {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:38:32.237446   80762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:38:32.237503   80762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:32.282436   80762 cri.go:89] found id: ""
	I0612 21:38:32.282516   80762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:38:32.295283   80762 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:38:32.295313   80762 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:38:32.295321   80762 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:38:32.295400   80762 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:38:32.307483   80762 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:38:32.308555   80762 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-983302" does not appear in /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:38:32.309335   80762 kubeconfig.go:62] /home/jenkins/minikube-integration/17779-14199/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-983302" cluster setting kubeconfig missing "old-k8s-version-983302" context setting]
	I0612 21:38:32.310486   80762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:32.397524   80762 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:38:32.411765   80762 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.81
	I0612 21:38:32.411797   80762 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:38:32.411807   80762 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:38:32.411849   80762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:32.460009   80762 cri.go:89] found id: ""
	I0612 21:38:32.460078   80762 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:38:32.481670   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:38:32.493664   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:38:32.493684   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:38:32.493734   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:38:32.503974   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:38:32.504044   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:38:32.515971   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:38:32.525772   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:38:32.525832   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:38:32.537137   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:38:32.548539   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:38:32.548600   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:38:32.560401   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:38:32.570608   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:38:32.570681   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:38:32.582763   80762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:38:32.594407   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:32.734633   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.526337   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.768139   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.896716   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.986708   80762 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:38:33.986832   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:34.487194   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:34.987580   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.486966   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.987793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:36.487534   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:36.987526   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:37.487035   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:37.986904   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:38.487262   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:38.986907   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:39.486895   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:39.987060   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.487385   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.987049   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:41.487325   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:41.987550   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:42.487225   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:42.987579   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:43.487465   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:43.987265   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:44.487935   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:44.987399   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:45.487793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:45.986898   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:46.486985   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:46.986848   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:47.486947   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:47.987863   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:48.487299   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:48.986886   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:49.486972   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:49.987859   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:50.487034   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:50.987724   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.486948   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.986873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:52.487668   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:52.987635   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:53.487500   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:53.987860   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:54.487855   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:54.986868   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:55.487259   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:55.987902   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.487535   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.987269   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:57.487542   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:57.987222   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:58.486976   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:58.986913   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:59.487269   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:59.987289   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.487208   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.987690   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:01.487283   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:01.987541   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:02.487589   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:02.987853   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:03.487382   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:03.987303   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:04.487852   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:04.987464   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.486928   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.987660   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.487208   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.987822   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:07.487497   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:07.987732   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:08.486974   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:08.986873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:09.486941   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:09.986929   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:10.487754   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:10.987685   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:11.486910   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:11.987457   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:12.486873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:12.987394   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:13.486915   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:13.987880   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:14.486881   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:14.986951   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:15.487462   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:15.986850   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.487213   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.987066   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:17.487882   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:17.987273   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:18.486996   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:18.987836   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:19.487622   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:19.987381   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:20.487005   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:20.987638   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.487670   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.987552   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:22.487438   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:22.987165   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:23.487122   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:23.987804   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:24.487583   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:24.987647   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.487126   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.987251   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:26.486996   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:26.987044   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:27.486911   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:27.987822   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:28.487496   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:28.987166   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:29.487892   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:29.987787   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.487315   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.987933   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:31.487255   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:31.987793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:32.487881   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:32.987267   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:33.487678   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:33.987296   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:33.987371   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:34.028670   80762 cri.go:89] found id: ""
	I0612 21:39:34.028699   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.028710   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:34.028717   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:34.028778   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:34.068371   80762 cri.go:89] found id: ""
	I0612 21:39:34.068400   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.068412   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:34.068419   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:34.068485   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:34.104605   80762 cri.go:89] found id: ""
	I0612 21:39:34.104634   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.104643   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:34.104650   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:34.104745   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:34.150301   80762 cri.go:89] found id: ""
	I0612 21:39:34.150327   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.150335   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:34.150341   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:34.150396   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:34.191426   80762 cri.go:89] found id: ""
	I0612 21:39:34.191462   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.191475   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:34.191484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:34.191562   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:34.228483   80762 cri.go:89] found id: ""
	I0612 21:39:34.228523   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.228535   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:34.228543   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:34.228653   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:34.262834   80762 cri.go:89] found id: ""
	I0612 21:39:34.262863   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.262873   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:34.262881   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:34.262944   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:34.298283   80762 cri.go:89] found id: ""
	I0612 21:39:34.298312   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.298321   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:34.298330   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:34.298340   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:34.350889   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:34.350918   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:34.365264   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:34.365289   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:34.508130   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:34.508162   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:34.508180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:34.572036   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:34.572076   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:37.114371   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:37.127410   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:37.127492   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:37.168684   80762 cri.go:89] found id: ""
	I0612 21:39:37.168705   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.168714   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:37.168723   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:37.168798   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:37.208765   80762 cri.go:89] found id: ""
	I0612 21:39:37.208797   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.208808   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:37.208815   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:37.208875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:37.266245   80762 cri.go:89] found id: ""
	I0612 21:39:37.266270   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.266277   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:37.266283   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:37.266331   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:37.313557   80762 cri.go:89] found id: ""
	I0612 21:39:37.313586   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.313597   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:37.313606   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:37.313677   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:37.353292   80762 cri.go:89] found id: ""
	I0612 21:39:37.353318   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.353325   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:37.353332   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:37.353389   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:37.391940   80762 cri.go:89] found id: ""
	I0612 21:39:37.391974   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.391984   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:37.392015   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:37.392078   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:37.432133   80762 cri.go:89] found id: ""
	I0612 21:39:37.432154   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.432166   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:37.432174   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:37.432228   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:37.468274   80762 cri.go:89] found id: ""
	I0612 21:39:37.468302   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.468310   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:37.468328   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:37.468347   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:37.543904   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:37.543941   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:37.586957   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:37.586982   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:37.641247   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:37.641288   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:37.657076   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:37.657101   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:37.729279   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:40.229638   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:40.243825   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:40.243889   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:40.282795   80762 cri.go:89] found id: ""
	I0612 21:39:40.282821   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.282829   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:40.282834   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:40.282879   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:40.320211   80762 cri.go:89] found id: ""
	I0612 21:39:40.320236   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.320246   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:40.320252   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:40.320338   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:40.356270   80762 cri.go:89] found id: ""
	I0612 21:39:40.356292   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.356300   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:40.356306   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:40.356353   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:40.394667   80762 cri.go:89] found id: ""
	I0612 21:39:40.394691   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.394699   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:40.394704   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:40.394751   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:40.432765   80762 cri.go:89] found id: ""
	I0612 21:39:40.432794   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.432804   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:40.432811   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:40.432883   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:40.472347   80762 cri.go:89] found id: ""
	I0612 21:39:40.472386   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.472406   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:40.472414   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:40.472477   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:40.508414   80762 cri.go:89] found id: ""
	I0612 21:39:40.508445   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.508456   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:40.508464   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:40.508521   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:40.546938   80762 cri.go:89] found id: ""
	I0612 21:39:40.546964   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.546972   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:40.546981   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:40.546993   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:40.621356   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:40.621380   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:40.621398   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:40.703830   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:40.703865   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:40.744915   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:40.744965   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:40.798883   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:40.798920   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:43.315905   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:43.330150   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:43.330221   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:43.377307   80762 cri.go:89] found id: ""
	I0612 21:39:43.377337   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.377347   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:43.377362   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:43.377426   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:43.412608   80762 cri.go:89] found id: ""
	I0612 21:39:43.412638   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.412648   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:43.412654   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:43.412718   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:43.446716   80762 cri.go:89] found id: ""
	I0612 21:39:43.446746   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.446755   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:43.446762   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:43.446823   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:43.484607   80762 cri.go:89] found id: ""
	I0612 21:39:43.484636   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.484647   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:43.484655   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:43.484700   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:43.522400   80762 cri.go:89] found id: ""
	I0612 21:39:43.522427   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.522438   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:43.522445   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:43.522529   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:43.559121   80762 cri.go:89] found id: ""
	I0612 21:39:43.559147   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.559163   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:43.559211   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:43.559292   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:43.595886   80762 cri.go:89] found id: ""
	I0612 21:39:43.595919   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.595937   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:43.595945   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:43.596011   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:43.638549   80762 cri.go:89] found id: ""
	I0612 21:39:43.638573   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.638583   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:43.638594   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:43.638609   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:43.705300   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:43.705338   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:43.723246   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:43.723281   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:43.807735   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:43.807760   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:43.807870   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:43.882971   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:43.883017   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:46.421476   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:46.434447   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:46.434532   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:46.470710   80762 cri.go:89] found id: ""
	I0612 21:39:46.470745   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.470758   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:46.470765   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:46.470828   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:46.504843   80762 cri.go:89] found id: ""
	I0612 21:39:46.504871   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.504878   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:46.504884   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:46.504941   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:46.542937   80762 cri.go:89] found id: ""
	I0612 21:39:46.542965   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.542973   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:46.542979   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:46.543035   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:46.581098   80762 cri.go:89] found id: ""
	I0612 21:39:46.581124   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.581133   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:46.581143   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:46.581189   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:46.617289   80762 cri.go:89] found id: ""
	I0612 21:39:46.617319   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.617329   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:46.617337   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:46.617402   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:46.651012   80762 cri.go:89] found id: ""
	I0612 21:39:46.651045   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.651057   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:46.651070   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:46.651141   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:46.688344   80762 cri.go:89] found id: ""
	I0612 21:39:46.688370   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.688379   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:46.688388   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:46.688451   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:46.724349   80762 cri.go:89] found id: ""
	I0612 21:39:46.724374   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.724382   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:46.724390   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:46.724404   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:46.797866   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:46.797894   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:46.797912   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:46.887520   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:46.887557   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:46.928143   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:46.928182   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:46.981416   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:46.981451   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:49.497028   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:49.510077   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:49.510147   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:49.544313   80762 cri.go:89] found id: ""
	I0612 21:39:49.544349   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.544359   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:49.544365   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:49.544416   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:49.580220   80762 cri.go:89] found id: ""
	I0612 21:39:49.580248   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.580256   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:49.580262   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:49.580316   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:49.619582   80762 cri.go:89] found id: ""
	I0612 21:39:49.619607   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.619615   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:49.619620   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:49.619692   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:49.656453   80762 cri.go:89] found id: ""
	I0612 21:39:49.656479   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.656487   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:49.656493   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:49.656557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:49.694285   80762 cri.go:89] found id: ""
	I0612 21:39:49.694318   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.694330   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:49.694338   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:49.694417   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:49.731100   80762 cri.go:89] found id: ""
	I0612 21:39:49.731127   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.731135   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:49.731140   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:49.731209   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:49.767709   80762 cri.go:89] found id: ""
	I0612 21:39:49.767731   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.767738   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:49.767744   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:49.767787   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:49.801231   80762 cri.go:89] found id: ""
	I0612 21:39:49.801265   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.801283   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:49.801294   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:49.801309   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:49.848500   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:49.848542   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:49.900084   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:49.900121   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:49.916208   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:49.916234   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:49.983283   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:49.983310   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:49.983325   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:52.566884   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:52.580400   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:52.580476   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:52.615922   80762 cri.go:89] found id: ""
	I0612 21:39:52.615957   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.615970   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:52.615978   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:52.616038   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:52.657316   80762 cri.go:89] found id: ""
	I0612 21:39:52.657348   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.657356   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:52.657362   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:52.657417   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:52.692426   80762 cri.go:89] found id: ""
	I0612 21:39:52.692459   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.692470   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:52.692478   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:52.692542   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:52.726800   80762 cri.go:89] found id: ""
	I0612 21:39:52.726835   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.726848   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:52.726856   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:52.726921   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:52.764283   80762 cri.go:89] found id: ""
	I0612 21:39:52.764314   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.764326   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:52.764341   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:52.764395   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:52.802279   80762 cri.go:89] found id: ""
	I0612 21:39:52.802311   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.802324   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:52.802331   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:52.802385   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:52.841433   80762 cri.go:89] found id: ""
	I0612 21:39:52.841466   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.841477   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:52.841484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:52.841546   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:52.881417   80762 cri.go:89] found id: ""
	I0612 21:39:52.881441   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.881449   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:52.881457   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:52.881468   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:52.936228   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:52.936262   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:52.950688   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:52.950718   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:53.025101   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:53.025122   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:53.025138   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:53.114986   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:53.115031   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:55.653893   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:55.668983   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:55.669047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:55.708445   80762 cri.go:89] found id: ""
	I0612 21:39:55.708475   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.708486   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:55.708494   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:55.708558   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:55.745158   80762 cri.go:89] found id: ""
	I0612 21:39:55.745185   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.745195   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:55.745204   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:55.745270   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:55.785322   80762 cri.go:89] found id: ""
	I0612 21:39:55.785344   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.785363   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:55.785370   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:55.785442   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:55.822371   80762 cri.go:89] found id: ""
	I0612 21:39:55.822397   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.822408   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:55.822416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:55.822484   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:55.856866   80762 cri.go:89] found id: ""
	I0612 21:39:55.856888   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.856895   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:55.856900   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:55.856954   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:55.891618   80762 cri.go:89] found id: ""
	I0612 21:39:55.891648   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.891660   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:55.891668   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:55.891731   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:55.927483   80762 cri.go:89] found id: ""
	I0612 21:39:55.927504   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.927513   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:55.927519   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:55.927572   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:55.963546   80762 cri.go:89] found id: ""
	I0612 21:39:55.963572   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.963584   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:55.963597   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:55.963616   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:56.037421   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:56.037442   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:56.037453   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:56.112148   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:56.112185   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:56.163359   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:56.163389   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:56.217109   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:56.217144   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:58.733278   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:58.746890   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:58.746951   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:58.785222   80762 cri.go:89] found id: ""
	I0612 21:39:58.785252   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.785263   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:58.785269   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:58.785343   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:58.824421   80762 cri.go:89] found id: ""
	I0612 21:39:58.824448   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.824455   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:58.824461   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:58.824521   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:58.863626   80762 cri.go:89] found id: ""
	I0612 21:39:58.863658   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.863669   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:58.863728   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:58.863818   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:58.904040   80762 cri.go:89] found id: ""
	I0612 21:39:58.904064   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.904073   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:58.904080   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:58.904147   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:58.937508   80762 cri.go:89] found id: ""
	I0612 21:39:58.937543   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.937557   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:58.937565   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:58.937632   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:58.974283   80762 cri.go:89] found id: ""
	I0612 21:39:58.974311   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.974322   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:58.974330   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:58.974383   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:59.009954   80762 cri.go:89] found id: ""
	I0612 21:39:59.009987   80762 logs.go:276] 0 containers: []
	W0612 21:39:59.009999   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:59.010007   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:59.010072   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:59.051911   80762 cri.go:89] found id: ""
	I0612 21:39:59.051935   80762 logs.go:276] 0 containers: []
	W0612 21:39:59.051943   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:59.051951   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:59.051961   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:59.102911   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:59.102942   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:59.116576   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:59.116608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:59.189590   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:59.189619   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:59.189634   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:59.270192   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:59.270232   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:01.820872   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:01.834916   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:01.835000   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:01.870526   80762 cri.go:89] found id: ""
	I0612 21:40:01.870560   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.870572   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:01.870579   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:01.870642   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:01.909581   80762 cri.go:89] found id: ""
	I0612 21:40:01.909614   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.909626   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:01.909633   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:01.909727   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:01.947944   80762 cri.go:89] found id: ""
	I0612 21:40:01.947976   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.947988   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:01.947995   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:01.948059   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:01.985745   80762 cri.go:89] found id: ""
	I0612 21:40:01.985781   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.985793   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:01.985800   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:01.985860   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:02.023716   80762 cri.go:89] found id: ""
	I0612 21:40:02.023741   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.023749   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:02.023754   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:02.023801   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:02.059136   80762 cri.go:89] found id: ""
	I0612 21:40:02.059168   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.059203   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:02.059212   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:02.059283   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:02.104520   80762 cri.go:89] found id: ""
	I0612 21:40:02.104544   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.104552   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:02.104558   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:02.104618   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:02.146130   80762 cri.go:89] found id: ""
	I0612 21:40:02.146164   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.146176   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:02.146187   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:02.146202   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:02.199672   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:02.199710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:02.215224   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:02.215256   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:02.290030   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:02.290057   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:02.290072   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:02.374579   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:02.374615   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:04.915345   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:04.928323   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:04.928404   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:04.963267   80762 cri.go:89] found id: ""
	I0612 21:40:04.963297   80762 logs.go:276] 0 containers: []
	W0612 21:40:04.963310   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:04.963319   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:04.963386   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:04.998378   80762 cri.go:89] found id: ""
	I0612 21:40:04.998409   80762 logs.go:276] 0 containers: []
	W0612 21:40:04.998420   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:04.998426   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:04.998498   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:05.038094   80762 cri.go:89] found id: ""
	I0612 21:40:05.038118   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.038126   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:05.038132   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:05.038181   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:05.074331   80762 cri.go:89] found id: ""
	I0612 21:40:05.074366   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.074379   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:05.074386   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:05.074462   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:05.109332   80762 cri.go:89] found id: ""
	I0612 21:40:05.109359   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.109368   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:05.109373   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:05.109423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:05.143875   80762 cri.go:89] found id: ""
	I0612 21:40:05.143908   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.143918   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:05.143926   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:05.143990   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:05.183695   80762 cri.go:89] found id: ""
	I0612 21:40:05.183724   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.183731   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:05.183737   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:05.183792   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:05.222852   80762 cri.go:89] found id: ""
	I0612 21:40:05.222878   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.222887   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:05.222895   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:05.222907   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:05.262661   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:05.262687   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:05.315563   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:05.315593   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:05.332128   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:05.332163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:05.411675   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:05.411699   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:05.411712   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
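	The block above is one full probe-and-gather cycle, and it repeats essentially unchanged for the rest of this start attempt: each pass checks for a running kube-apiserver process with pgrep, asks the CRI runtime for every expected control-plane container with crictl, and, finding none, falls back to collecting kubelet, dmesg, CRI-O, and container-status logs; the describe-nodes step fails because nothing is listening on localhost:8443 yet. Below is a minimal sketch of the equivalent manual checks on the node; the commands and paths are taken from the log itself, while the retry count and sleep interval are illustrative assumptions, not minikube's actual retry policy.

	#!/usr/bin/env bash
	# Rough manual equivalent of the probe-and-gather cycle logged above.
	# Assumptions: crictl and journalctl exist on the node, and the kubectl
	# binary sits at /var/lib/minikube/binaries/v1.20.0/kubectl as shown in
	# the log; the 30-try / 3-second retry policy below is illustrative only.
	for i in $(seq 1 30); do
	  # Is a kube-apiserver process for this cluster running at all?
	  if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	    echo "kube-apiserver process found"
	    break
	  fi

	  # Ask the CRI runtime for each expected control-plane container.
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    echo "$name: ${ids:-<no containers>}"
	  done

	  # Fallback log sources the tooling gathers on every pass.
	  sudo journalctl -u kubelet -n 400 --no-pager | tail -n 20
	  sudo journalctl -u crio -n 400 --no-pager | tail -n 20
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 20

	  # The step that fails with "connection refused" while nothing is
	  # listening on localhost:8443.
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig || true

	  sleep 3
	done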
	I0612 21:40:07.991930   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:08.005743   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:08.005807   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:08.041685   80762 cri.go:89] found id: ""
	I0612 21:40:08.041714   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.041724   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:08.041732   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:08.041791   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:08.080875   80762 cri.go:89] found id: ""
	I0612 21:40:08.080905   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.080916   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:08.080925   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:08.080993   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:08.117290   80762 cri.go:89] found id: ""
	I0612 21:40:08.117316   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.117323   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:08.117329   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:08.117387   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:08.154345   80762 cri.go:89] found id: ""
	I0612 21:40:08.154376   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.154387   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:08.154395   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:08.154459   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:08.192913   80762 cri.go:89] found id: ""
	I0612 21:40:08.192947   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.192957   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:08.192969   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:08.193033   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:08.235732   80762 cri.go:89] found id: ""
	I0612 21:40:08.235764   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.235775   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:08.235782   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:08.235853   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:08.274282   80762 cri.go:89] found id: ""
	I0612 21:40:08.274306   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.274314   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:08.274320   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:08.274366   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:08.314585   80762 cri.go:89] found id: ""
	I0612 21:40:08.314608   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.314619   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:08.314628   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:08.314641   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:08.331693   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:08.331725   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:08.414541   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:08.414565   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:08.414584   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:08.496428   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:08.496460   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:08.546991   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:08.547020   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:11.099778   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:11.113450   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:11.113539   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:11.150426   80762 cri.go:89] found id: ""
	I0612 21:40:11.150451   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.150459   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:11.150464   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:11.150524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:11.189931   80762 cri.go:89] found id: ""
	I0612 21:40:11.189958   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.189967   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:11.189972   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:11.190031   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:11.228116   80762 cri.go:89] found id: ""
	I0612 21:40:11.228144   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.228154   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:11.228161   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:11.228243   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:11.268639   80762 cri.go:89] found id: ""
	I0612 21:40:11.268664   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.268672   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:11.268678   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:11.268723   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:11.306077   80762 cri.go:89] found id: ""
	I0612 21:40:11.306105   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.306116   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:11.306123   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:11.306187   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:11.344360   80762 cri.go:89] found id: ""
	I0612 21:40:11.344388   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.344399   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:11.344418   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:11.344475   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:11.382906   80762 cri.go:89] found id: ""
	I0612 21:40:11.382937   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.382948   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:11.382957   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:11.383027   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:11.418388   80762 cri.go:89] found id: ""
	I0612 21:40:11.418419   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.418429   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:11.418439   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:11.418453   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:11.432204   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:11.432241   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:11.508219   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:11.508251   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:11.508263   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:11.593021   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:11.593058   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:11.634056   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:11.634087   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:14.187831   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:14.203153   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:14.203248   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:14.239693   80762 cri.go:89] found id: ""
	I0612 21:40:14.239716   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.239723   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:14.239729   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:14.239827   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:14.273206   80762 cri.go:89] found id: ""
	I0612 21:40:14.273234   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.273244   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:14.273251   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:14.273313   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:14.315512   80762 cri.go:89] found id: ""
	I0612 21:40:14.315592   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.315610   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:14.315618   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:14.315679   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:14.352454   80762 cri.go:89] found id: ""
	I0612 21:40:14.352483   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.352496   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:14.352504   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:14.352554   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:14.387845   80762 cri.go:89] found id: ""
	I0612 21:40:14.387872   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.387880   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:14.387886   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:14.387935   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:14.423220   80762 cri.go:89] found id: ""
	I0612 21:40:14.423245   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.423254   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:14.423259   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:14.423322   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:14.457744   80762 cri.go:89] found id: ""
	I0612 21:40:14.457772   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.457784   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:14.457791   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:14.457849   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:14.493580   80762 cri.go:89] found id: ""
	I0612 21:40:14.493611   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.493622   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:14.493633   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:14.493669   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:14.566867   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:14.566894   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:14.566913   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:14.645916   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:14.645959   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:14.690232   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:14.690262   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:14.741532   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:14.741576   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:17.257886   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:17.271841   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:17.271910   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:17.309628   80762 cri.go:89] found id: ""
	I0612 21:40:17.309654   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.309667   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:17.309675   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:17.309746   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:17.346671   80762 cri.go:89] found id: ""
	I0612 21:40:17.346752   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.346769   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:17.346777   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:17.346842   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:17.381145   80762 cri.go:89] found id: ""
	I0612 21:40:17.381169   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.381177   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:17.381184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:17.381241   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:17.417159   80762 cri.go:89] found id: ""
	I0612 21:40:17.417179   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.417187   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:17.417194   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:17.417254   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:17.453189   80762 cri.go:89] found id: ""
	I0612 21:40:17.453213   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.453220   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:17.453226   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:17.453284   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:17.510988   80762 cri.go:89] found id: ""
	I0612 21:40:17.511012   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.511019   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:17.511026   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:17.511083   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:17.548141   80762 cri.go:89] found id: ""
	I0612 21:40:17.548166   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.548176   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:17.548182   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:17.548243   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:17.584591   80762 cri.go:89] found id: ""
	I0612 21:40:17.584619   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.584627   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:17.584637   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:17.584647   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:17.628627   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:17.628662   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:17.682792   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:17.682823   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:17.697921   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:17.697959   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:17.770591   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:17.770617   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:17.770633   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:20.350181   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:20.363671   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:20.363743   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:20.399858   80762 cri.go:89] found id: ""
	I0612 21:40:20.399889   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.399896   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:20.399903   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:20.399963   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:20.437715   80762 cri.go:89] found id: ""
	I0612 21:40:20.437755   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.437766   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:20.437776   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:20.437843   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:20.472525   80762 cri.go:89] found id: ""
	I0612 21:40:20.472558   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.472573   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:20.472582   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:20.472642   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:20.507923   80762 cri.go:89] found id: ""
	I0612 21:40:20.507948   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.507959   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:20.507966   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:20.508029   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:20.545471   80762 cri.go:89] found id: ""
	I0612 21:40:20.545502   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.545512   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:20.545519   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:20.545586   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:20.583793   80762 cri.go:89] found id: ""
	I0612 21:40:20.583829   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.583839   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:20.583846   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:20.583912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:20.624399   80762 cri.go:89] found id: ""
	I0612 21:40:20.624438   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.624449   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:20.624467   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:20.624530   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:20.665158   80762 cri.go:89] found id: ""
	I0612 21:40:20.665184   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.665194   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:20.665203   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:20.665217   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:20.743062   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:20.743101   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:20.792573   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:20.792613   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:20.847998   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:20.848033   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:20.863447   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:20.863497   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:20.938020   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:23.438289   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:23.453792   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:23.453855   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:23.494044   80762 cri.go:89] found id: ""
	I0612 21:40:23.494070   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.494077   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:23.494083   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:23.494144   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:23.533278   80762 cri.go:89] found id: ""
	I0612 21:40:23.533305   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.533313   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:23.533319   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:23.533380   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:23.568504   80762 cri.go:89] found id: ""
	I0612 21:40:23.568538   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.568549   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:23.568556   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:23.568619   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:23.610596   80762 cri.go:89] found id: ""
	I0612 21:40:23.610624   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.610633   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:23.610638   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:23.610690   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:23.651856   80762 cri.go:89] found id: ""
	I0612 21:40:23.651886   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.651896   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:23.651903   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:23.651978   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:23.690989   80762 cri.go:89] found id: ""
	I0612 21:40:23.691020   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.691030   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:23.691036   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:23.691089   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:23.730417   80762 cri.go:89] found id: ""
	I0612 21:40:23.730454   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.730467   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:23.730476   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:23.730538   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:23.773887   80762 cri.go:89] found id: ""
	I0612 21:40:23.773913   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.773921   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:23.773932   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:23.773947   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:23.825771   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:23.825805   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:23.840136   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:23.840163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:23.933645   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:23.933670   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:23.933686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:24.020205   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:24.020243   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:26.566746   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:26.579557   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:26.579612   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:26.614721   80762 cri.go:89] found id: ""
	I0612 21:40:26.614749   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.614757   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:26.614763   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:26.614815   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:26.651398   80762 cri.go:89] found id: ""
	I0612 21:40:26.651427   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.651437   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:26.651445   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:26.651506   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:26.688217   80762 cri.go:89] found id: ""
	I0612 21:40:26.688249   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.688261   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:26.688268   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:26.688333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:26.721316   80762 cri.go:89] found id: ""
	I0612 21:40:26.721346   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.721357   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:26.721364   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:26.721424   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:26.758842   80762 cri.go:89] found id: ""
	I0612 21:40:26.758868   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.758878   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:26.758885   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:26.758957   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:26.795696   80762 cri.go:89] found id: ""
	I0612 21:40:26.795725   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.795733   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:26.795738   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:26.795788   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:26.834903   80762 cri.go:89] found id: ""
	I0612 21:40:26.834932   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.834941   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:26.834947   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:26.835020   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:26.872751   80762 cri.go:89] found id: ""
	I0612 21:40:26.872788   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.872796   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:26.872805   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:26.872817   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:26.952401   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:26.952440   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:26.990548   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:26.990583   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:27.042973   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:27.043029   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:27.058348   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:27.058379   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:27.133047   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:29.634105   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:29.654113   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:29.654171   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:29.700138   80762 cri.go:89] found id: ""
	I0612 21:40:29.700169   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.700179   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:29.700188   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:29.700260   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:29.751599   80762 cri.go:89] found id: ""
	I0612 21:40:29.751628   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.751638   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:29.751646   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:29.751699   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:29.801971   80762 cri.go:89] found id: ""
	I0612 21:40:29.801995   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.802003   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:29.802008   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:29.802059   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:29.839381   80762 cri.go:89] found id: ""
	I0612 21:40:29.839407   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.839418   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:29.839426   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:29.839484   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:29.876634   80762 cri.go:89] found id: ""
	I0612 21:40:29.876661   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.876668   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:29.876675   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:29.876721   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:29.909673   80762 cri.go:89] found id: ""
	I0612 21:40:29.909707   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.909718   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:29.909726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:29.909791   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:29.947984   80762 cri.go:89] found id: ""
	I0612 21:40:29.948019   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.948029   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:29.948037   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:29.948099   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:29.988611   80762 cri.go:89] found id: ""
	I0612 21:40:29.988639   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.988650   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:29.988660   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:29.988675   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:30.073180   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:30.073216   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:30.114703   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:30.114732   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:30.173242   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:30.173278   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:30.189081   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:30.189112   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:30.263564   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:32.763967   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:32.776738   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:32.776808   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:32.813088   80762 cri.go:89] found id: ""
	I0612 21:40:32.813115   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.813125   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:32.813132   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:32.813195   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:32.850960   80762 cri.go:89] found id: ""
	I0612 21:40:32.850987   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.850996   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:32.851004   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:32.851065   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:32.887229   80762 cri.go:89] found id: ""
	I0612 21:40:32.887259   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.887270   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:32.887277   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:32.887346   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:32.923123   80762 cri.go:89] found id: ""
	I0612 21:40:32.923148   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.923158   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:32.923164   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:32.923242   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:32.962603   80762 cri.go:89] found id: ""
	I0612 21:40:32.962628   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.962638   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:32.962644   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:32.962695   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:32.998971   80762 cri.go:89] found id: ""
	I0612 21:40:32.999025   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.999037   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:32.999046   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:32.999120   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:33.037640   80762 cri.go:89] found id: ""
	I0612 21:40:33.037670   80762 logs.go:276] 0 containers: []
	W0612 21:40:33.037680   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:33.037686   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:33.037748   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:33.073758   80762 cri.go:89] found id: ""
	I0612 21:40:33.073787   80762 logs.go:276] 0 containers: []
	W0612 21:40:33.073794   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:33.073804   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:33.073815   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:33.124478   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:33.124512   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:33.139010   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:33.139036   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:33.207693   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:33.207716   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:33.207732   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:33.287710   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:33.287746   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:35.831654   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:35.845783   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:35.845845   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:35.882097   80762 cri.go:89] found id: ""
	I0612 21:40:35.882129   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.882141   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:35.882149   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:35.882205   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:35.920931   80762 cri.go:89] found id: ""
	I0612 21:40:35.920972   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.920980   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:35.920985   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:35.921061   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:35.958689   80762 cri.go:89] found id: ""
	I0612 21:40:35.958712   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.958721   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:35.958726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:35.958774   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:35.994973   80762 cri.go:89] found id: ""
	I0612 21:40:35.995028   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.995040   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:35.995048   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:35.995114   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:36.035679   80762 cri.go:89] found id: ""
	I0612 21:40:36.035707   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.035715   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:36.035721   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:36.035768   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:36.071498   80762 cri.go:89] found id: ""
	I0612 21:40:36.071525   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.071534   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:36.071544   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:36.071594   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:36.107367   80762 cri.go:89] found id: ""
	I0612 21:40:36.107397   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.107406   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:36.107413   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:36.107466   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:36.148668   80762 cri.go:89] found id: ""
	I0612 21:40:36.148699   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.148710   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:36.148721   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:36.148736   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:36.207719   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:36.207765   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:36.223129   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:36.223158   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:36.290786   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:36.290809   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:36.290822   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:36.375361   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:36.375398   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:38.921100   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:38.935420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:38.935491   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:38.970519   80762 cri.go:89] found id: ""
	I0612 21:40:38.970548   80762 logs.go:276] 0 containers: []
	W0612 21:40:38.970559   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:38.970567   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:38.970639   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:39.005866   80762 cri.go:89] found id: ""
	I0612 21:40:39.005888   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.005896   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:39.005902   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:39.005954   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:39.043619   80762 cri.go:89] found id: ""
	I0612 21:40:39.043647   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.043655   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:39.043661   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:39.043709   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:39.081311   80762 cri.go:89] found id: ""
	I0612 21:40:39.081336   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.081344   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:39.081350   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:39.081410   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:39.117326   80762 cri.go:89] found id: ""
	I0612 21:40:39.117358   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.117367   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:39.117372   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:39.117423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:39.151785   80762 cri.go:89] found id: ""
	I0612 21:40:39.151819   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.151828   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:39.151835   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:39.151899   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:39.187031   80762 cri.go:89] found id: ""
	I0612 21:40:39.187057   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.187065   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:39.187071   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:39.187119   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:39.222186   80762 cri.go:89] found id: ""
	I0612 21:40:39.222212   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.222223   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:39.222233   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:39.222245   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:39.276126   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:39.276164   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:39.291631   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:39.291658   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:39.365615   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:39.365641   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:39.365659   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:39.442548   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:39.442600   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:41.980840   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:41.996629   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:41.996686   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:42.034158   80762 cri.go:89] found id: ""
	I0612 21:40:42.034186   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.034195   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:42.034202   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:42.034274   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:42.070981   80762 cri.go:89] found id: ""
	I0612 21:40:42.071011   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.071021   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:42.071028   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:42.071093   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:42.108282   80762 cri.go:89] found id: ""
	I0612 21:40:42.108309   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.108316   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:42.108322   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:42.108369   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:42.146394   80762 cri.go:89] found id: ""
	I0612 21:40:42.146423   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.146434   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:42.146454   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:42.146539   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:42.183577   80762 cri.go:89] found id: ""
	I0612 21:40:42.183601   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.183608   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:42.183614   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:42.183662   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:42.222069   80762 cri.go:89] found id: ""
	I0612 21:40:42.222100   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.222109   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:42.222115   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:42.222168   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:42.259128   80762 cri.go:89] found id: ""
	I0612 21:40:42.259155   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.259164   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:42.259192   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:42.259282   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:42.296321   80762 cri.go:89] found id: ""
	I0612 21:40:42.296354   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.296368   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:42.296380   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:42.296400   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:42.311098   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:42.311137   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:42.386116   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:42.386144   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:42.386163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:42.467016   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:42.467054   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:42.509143   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:42.509180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:45.062872   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:45.076570   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:45.076658   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:45.114362   80762 cri.go:89] found id: ""
	I0612 21:40:45.114394   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.114404   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:45.114412   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:45.114478   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:45.151577   80762 cri.go:89] found id: ""
	I0612 21:40:45.151609   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.151620   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:45.151627   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:45.151689   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:45.188753   80762 cri.go:89] found id: ""
	I0612 21:40:45.188785   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.188795   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:45.188802   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:45.188861   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:45.224775   80762 cri.go:89] found id: ""
	I0612 21:40:45.224801   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.224808   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:45.224814   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:45.224873   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:45.260440   80762 cri.go:89] found id: ""
	I0612 21:40:45.260472   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.260483   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:45.260490   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:45.260547   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:45.297662   80762 cri.go:89] found id: ""
	I0612 21:40:45.297697   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.297709   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:45.297716   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:45.297774   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:45.335637   80762 cri.go:89] found id: ""
	I0612 21:40:45.335669   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.335682   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:45.335690   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:45.335753   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:45.371523   80762 cri.go:89] found id: ""
	I0612 21:40:45.371580   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.371590   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:45.371599   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:45.371610   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:45.424029   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:45.424065   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:45.440339   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:45.440378   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:45.509504   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:45.509526   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:45.509541   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:45.591857   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:45.591893   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:48.135912   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:48.151271   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:48.151331   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:48.192740   80762 cri.go:89] found id: ""
	I0612 21:40:48.192775   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.192788   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:48.192798   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:48.192875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:48.230440   80762 cri.go:89] found id: ""
	I0612 21:40:48.230469   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.230479   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:48.230487   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:48.230549   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:48.270892   80762 cri.go:89] found id: ""
	I0612 21:40:48.270922   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.270933   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:48.270941   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:48.270996   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:48.308555   80762 cri.go:89] found id: ""
	I0612 21:40:48.308580   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.308588   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:48.308594   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:48.308640   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:48.342705   80762 cri.go:89] found id: ""
	I0612 21:40:48.342727   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.342735   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:48.342741   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:48.342788   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:48.377418   80762 cri.go:89] found id: ""
	I0612 21:40:48.377450   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.377461   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:48.377468   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:48.377535   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:48.413092   80762 cri.go:89] found id: ""
	I0612 21:40:48.413126   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.413141   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:48.413149   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:48.413215   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:48.447673   80762 cri.go:89] found id: ""
	I0612 21:40:48.447699   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.447708   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:48.447716   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:48.447728   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:48.488508   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:48.488542   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:48.540573   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:48.540608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:48.554735   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:48.554762   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:48.632074   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:48.632098   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:48.632117   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:51.212336   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:51.227428   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:51.227493   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:51.268124   80762 cri.go:89] found id: ""
	I0612 21:40:51.268157   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.268167   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:51.268172   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:51.268220   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:51.305751   80762 cri.go:89] found id: ""
	I0612 21:40:51.305777   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.305785   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:51.305793   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:51.305849   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:51.347292   80762 cri.go:89] found id: ""
	I0612 21:40:51.347318   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.347325   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:51.347332   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:51.347394   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:51.387476   80762 cri.go:89] found id: ""
	I0612 21:40:51.387501   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.387509   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:51.387515   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:51.387573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:51.431992   80762 cri.go:89] found id: ""
	I0612 21:40:51.432019   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.432029   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:51.432036   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:51.432096   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:51.477204   80762 cri.go:89] found id: ""
	I0612 21:40:51.477235   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.477246   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:51.477254   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:51.477346   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:51.518449   80762 cri.go:89] found id: ""
	I0612 21:40:51.518477   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.518488   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:51.518502   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:51.518562   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:51.554991   80762 cri.go:89] found id: ""
	I0612 21:40:51.555015   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.555024   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:51.555033   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:51.555046   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:51.606732   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:51.606769   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:51.620512   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:51.620538   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:51.697029   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:51.697058   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:51.697074   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:51.775401   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:51.775437   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:54.318059   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:54.331420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:54.331509   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:54.367886   80762 cri.go:89] found id: ""
	I0612 21:40:54.367926   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.367948   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:54.367959   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:54.368047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:54.403998   80762 cri.go:89] found id: ""
	I0612 21:40:54.404023   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.404034   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:54.404041   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:54.404108   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:54.441449   80762 cri.go:89] found id: ""
	I0612 21:40:54.441480   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.441491   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:54.441498   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:54.441557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:54.476459   80762 cri.go:89] found id: ""
	I0612 21:40:54.476490   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.476500   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:54.476508   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:54.476573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:54.515337   80762 cri.go:89] found id: ""
	I0612 21:40:54.515360   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.515368   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:54.515374   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:54.515423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:54.551447   80762 cri.go:89] found id: ""
	I0612 21:40:54.551468   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.551475   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:54.551481   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:54.551528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:54.587082   80762 cri.go:89] found id: ""
	I0612 21:40:54.587114   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.587125   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:54.587145   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:54.587225   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:54.624211   80762 cri.go:89] found id: ""
	I0612 21:40:54.624235   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.624257   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:54.624268   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:54.624282   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:54.677816   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:54.677848   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:54.693725   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:54.693749   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:54.772229   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:54.772255   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:54.772273   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:54.852543   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:54.852578   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:57.397722   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:57.411082   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:57.411145   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:57.449633   80762 cri.go:89] found id: ""
	I0612 21:40:57.449662   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.449673   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:57.449680   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:57.449745   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:57.489855   80762 cri.go:89] found id: ""
	I0612 21:40:57.489880   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.489889   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:57.489894   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:57.489952   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:57.528986   80762 cri.go:89] found id: ""
	I0612 21:40:57.529006   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.529014   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:57.529019   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:57.529081   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:57.566701   80762 cri.go:89] found id: ""
	I0612 21:40:57.566730   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.566739   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:57.566746   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:57.566800   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:57.601114   80762 cri.go:89] found id: ""
	I0612 21:40:57.601137   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.601145   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:57.601151   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:57.601212   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:57.636120   80762 cri.go:89] found id: ""
	I0612 21:40:57.636145   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.636155   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:57.636163   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:57.636225   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:57.676912   80762 cri.go:89] found id: ""
	I0612 21:40:57.676953   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.676960   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:57.676966   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:57.677039   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:57.714671   80762 cri.go:89] found id: ""
	I0612 21:40:57.714691   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.714699   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:57.714707   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:57.714720   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:57.770550   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:57.770583   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:57.785062   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:57.785093   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:57.853448   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:57.853468   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:57.853480   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:57.939957   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:57.939999   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:00.493469   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:00.509746   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:00.509819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:00.546582   80762 cri.go:89] found id: ""
	I0612 21:41:00.546610   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.546620   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:00.546629   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:00.546683   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:00.584229   80762 cri.go:89] found id: ""
	I0612 21:41:00.584256   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.584264   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:00.584269   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:00.584337   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:00.618679   80762 cri.go:89] found id: ""
	I0612 21:41:00.618704   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.618712   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:00.618719   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:00.618778   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:00.656336   80762 cri.go:89] found id: ""
	I0612 21:41:00.656364   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.656375   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:00.656384   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:00.656457   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:00.694147   80762 cri.go:89] found id: ""
	I0612 21:41:00.694173   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.694182   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:00.694187   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:00.694236   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:00.733964   80762 cri.go:89] found id: ""
	I0612 21:41:00.733994   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.734006   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:00.734014   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:00.734076   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:00.771245   80762 cri.go:89] found id: ""
	I0612 21:41:00.771274   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.771287   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:00.771293   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:00.771357   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:00.809118   80762 cri.go:89] found id: ""
	I0612 21:41:00.809150   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.809158   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:00.809168   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:00.809188   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:00.863479   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:00.863514   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:00.878749   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:00.878783   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:00.955800   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:00.955825   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:00.955844   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:01.037587   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:01.037618   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:03.583063   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:03.597656   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:03.597732   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:03.633233   80762 cri.go:89] found id: ""
	I0612 21:41:03.633263   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.633283   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:03.633291   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:03.633357   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:03.679900   80762 cri.go:89] found id: ""
	I0612 21:41:03.679930   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.679941   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:03.679948   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:03.680018   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:03.718766   80762 cri.go:89] found id: ""
	I0612 21:41:03.718792   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.718800   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:03.718811   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:03.718868   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:03.759404   80762 cri.go:89] found id: ""
	I0612 21:41:03.759429   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.759437   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:03.759443   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:03.759496   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:03.794313   80762 cri.go:89] found id: ""
	I0612 21:41:03.794348   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.794357   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:03.794364   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:03.794430   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:03.832525   80762 cri.go:89] found id: ""
	I0612 21:41:03.832546   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.832554   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:03.832559   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:03.832607   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:03.872841   80762 cri.go:89] found id: ""
	I0612 21:41:03.872868   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.872878   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:03.872885   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:03.872945   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:03.912736   80762 cri.go:89] found id: ""
	I0612 21:41:03.912760   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.912770   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:03.912781   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:03.912794   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:03.986645   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:03.986672   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:03.986688   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:04.066766   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:04.066799   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:04.108219   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:04.108250   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:04.168866   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:04.168911   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:06.684232   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:06.698359   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:06.698443   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:06.735324   80762 cri.go:89] found id: ""
	I0612 21:41:06.735350   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.735359   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:06.735364   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:06.735418   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:06.771763   80762 cri.go:89] found id: ""
	I0612 21:41:06.771786   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.771794   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:06.771799   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:06.771850   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:06.808151   80762 cri.go:89] found id: ""
	I0612 21:41:06.808179   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.808188   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:06.808193   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:06.808263   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:06.846099   80762 cri.go:89] found id: ""
	I0612 21:41:06.846121   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.846129   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:06.846134   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:06.846182   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:06.883559   80762 cri.go:89] found id: ""
	I0612 21:41:06.883584   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.883591   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:06.883597   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:06.883645   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:06.920799   80762 cri.go:89] found id: ""
	I0612 21:41:06.920830   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.920841   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:06.920849   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:06.920914   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:06.964441   80762 cri.go:89] found id: ""
	I0612 21:41:06.964472   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.964482   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:06.964490   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:06.964561   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:07.000866   80762 cri.go:89] found id: ""
	I0612 21:41:07.000901   80762 logs.go:276] 0 containers: []
	W0612 21:41:07.000912   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:07.000924   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:07.000993   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:07.017074   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:07.017121   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:07.093873   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:07.093901   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:07.093925   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:07.171258   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:07.171293   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:07.212588   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:07.212624   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:09.767332   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:09.781184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:09.781249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:09.818966   80762 cri.go:89] found id: ""
	I0612 21:41:09.818999   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.819008   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:09.819014   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:09.819064   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:09.854714   80762 cri.go:89] found id: ""
	I0612 21:41:09.854742   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.854760   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:09.854772   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:09.854823   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:09.891229   80762 cri.go:89] found id: ""
	I0612 21:41:09.891257   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.891268   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:09.891274   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:09.891335   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:09.928569   80762 cri.go:89] found id: ""
	I0612 21:41:09.928598   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.928610   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:09.928617   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:09.928680   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:09.963681   80762 cri.go:89] found id: ""
	I0612 21:41:09.963714   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.963725   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:09.963733   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:09.963819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:10.002340   80762 cri.go:89] found id: ""
	I0612 21:41:10.002368   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.002380   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:10.002388   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:10.002454   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:10.041935   80762 cri.go:89] found id: ""
	I0612 21:41:10.041961   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.041972   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:10.041979   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:10.042047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:10.080541   80762 cri.go:89] found id: ""
	I0612 21:41:10.080571   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.080584   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:10.080598   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:10.080614   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:10.140904   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:10.140944   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:10.176646   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:10.176688   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:10.272147   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:10.272169   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:10.272183   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:10.352649   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:10.352686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:12.896274   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:12.911147   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:12.911231   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:12.947628   80762 cri.go:89] found id: ""
	I0612 21:41:12.947651   80762 logs.go:276] 0 containers: []
	W0612 21:41:12.947660   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:12.947665   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:12.947726   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:12.982813   80762 cri.go:89] found id: ""
	I0612 21:41:12.982837   80762 logs.go:276] 0 containers: []
	W0612 21:41:12.982845   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:12.982851   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:12.982898   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:13.021360   80762 cri.go:89] found id: ""
	I0612 21:41:13.021403   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.021412   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:13.021417   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:13.021468   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:13.063534   80762 cri.go:89] found id: ""
	I0612 21:41:13.063566   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.063576   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:13.063585   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:13.063666   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:13.098767   80762 cri.go:89] found id: ""
	I0612 21:41:13.098796   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.098807   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:13.098816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:13.098878   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:13.140764   80762 cri.go:89] found id: ""
	I0612 21:41:13.140797   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.140809   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:13.140816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:13.140882   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:13.180356   80762 cri.go:89] found id: ""
	I0612 21:41:13.180400   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.180413   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:13.180420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:13.180482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:13.215198   80762 cri.go:89] found id: ""
	I0612 21:41:13.215227   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.215238   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:13.215249   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:13.215265   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:13.273143   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:13.273182   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:13.287861   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:13.287893   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:13.366052   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:13.366073   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:13.366099   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:13.450980   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:13.451015   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
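The block above, and each repetition that follows, is minikube polling for the v1.20.0 control plane to come up: it pgreps for a kube-apiserver process, asks crictl for containers matching each control-plane component, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output before retrying a few seconds later. A minimal Go sketch of that polling pattern is below; the component list and the retry interval are inferred from the log, and runCmd is a hypothetical stand-in for minikube's ssh_runner, so this is an illustration rather than the actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// components mirrors the container names probed in the log above
	// (taken from the log output, not from the minikube source).
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	// runCmd is a hypothetical helper: it runs a shell command on the node
	// and returns its trimmed stdout.
	func runCmd(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).Output()
		return strings.TrimSpace(string(out)), err
	}

	func waitForAPIServer(attempts int) bool {
		for i := 0; i < attempts; i++ {
			// First check for a running apiserver process, as the log does.
			if _, err := runCmd("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
				return true
			}
			// Then list CRI containers for each control-plane component.
			for _, name := range components {
				ids, _ := runCmd("sudo crictl ps -a --quiet --name=" + name)
				if ids == "" {
					fmt.Printf("No container was found matching %q\n", name)
				}
			}
			// Gather a subset of the diagnostics shown in the log before retrying.
			runCmd("sudo journalctl -u kubelet -n 400")
			runCmd("sudo journalctl -u crio -n 400")
			time.Sleep(3 * time.Second) // interval inferred from the timestamps above
		}
		return false
	}

	func main() {
		if !waitForAPIServer(10) {
			fmt.Println("kube-apiserver never became ready")
		}
	}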
	I0612 21:41:15.991386   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:16.005618   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:16.005699   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:16.047253   80762 cri.go:89] found id: ""
	I0612 21:41:16.047281   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.047289   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:16.047295   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:16.047356   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:16.082860   80762 cri.go:89] found id: ""
	I0612 21:41:16.082886   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.082894   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:16.082899   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:16.082948   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:16.123127   80762 cri.go:89] found id: ""
	I0612 21:41:16.123152   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.123164   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:16.123187   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:16.123247   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:16.167155   80762 cri.go:89] found id: ""
	I0612 21:41:16.167189   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.167199   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:16.167207   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:16.167276   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:16.204036   80762 cri.go:89] found id: ""
	I0612 21:41:16.204061   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.204071   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:16.204079   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:16.204140   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:16.246672   80762 cri.go:89] found id: ""
	I0612 21:41:16.246701   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.246712   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:16.246721   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:16.246785   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:16.286820   80762 cri.go:89] found id: ""
	I0612 21:41:16.286849   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.286857   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:16.286864   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:16.286919   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:16.326622   80762 cri.go:89] found id: ""
	I0612 21:41:16.326649   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.326660   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:16.326667   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:16.326678   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:16.407492   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:16.407525   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:16.448207   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:16.448236   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:16.501675   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:16.501714   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:16.518173   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:16.518202   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:16.592582   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
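Every describe-nodes attempt in this run fails the same way: kubectl, pointed at the node-local kubeconfig, gets connection refused from localhost:8443 because no kube-apiserver container ever started. The small Go sketch below reproduces that check; the port 8443 is the one shown in the errors above, and the probe only distinguishes "nothing listening" from "something listening", which is all the refused-connection message tells us.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeAPIServer distinguishes "connection refused" (nothing listening on
	// the port, as in the log above) from a port that accepts connections.
	func probeAPIServer(addr string) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// With no kube-apiserver running this prints a "connection refused"
			// error, matching the kubectl output in the log.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on", addr)
	}

	func main() {
		probeAPIServer("localhost:8443")
	}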
	I0612 21:41:19.093054   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:19.107926   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:19.108002   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:19.149386   80762 cri.go:89] found id: ""
	I0612 21:41:19.149411   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.149421   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:19.149429   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:19.149493   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:19.188092   80762 cri.go:89] found id: ""
	I0612 21:41:19.188120   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.188131   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:19.188139   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:19.188201   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:19.227203   80762 cri.go:89] found id: ""
	I0612 21:41:19.227229   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.227235   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:19.227242   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:19.227301   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:19.269187   80762 cri.go:89] found id: ""
	I0612 21:41:19.269217   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.269225   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:19.269232   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:19.269294   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:19.305394   80762 cri.go:89] found id: ""
	I0612 21:41:19.305418   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.305425   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:19.305431   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:19.305480   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:19.347794   80762 cri.go:89] found id: ""
	I0612 21:41:19.347825   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.347837   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:19.347846   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:19.347907   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:19.384314   80762 cri.go:89] found id: ""
	I0612 21:41:19.384346   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.384364   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:19.384372   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:19.384428   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:19.421782   80762 cri.go:89] found id: ""
	I0612 21:41:19.421811   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.421822   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:19.421834   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:19.421849   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:19.475969   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:19.476000   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:19.490683   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:19.490710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:19.574492   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:19.574513   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:19.574525   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:19.662288   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:19.662324   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:22.205404   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:22.220217   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:22.220297   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:22.256870   80762 cri.go:89] found id: ""
	I0612 21:41:22.256901   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.256913   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:22.256921   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:22.256984   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:22.290380   80762 cri.go:89] found id: ""
	I0612 21:41:22.290413   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.290425   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:22.290433   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:22.290497   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:22.324981   80762 cri.go:89] found id: ""
	I0612 21:41:22.325010   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.325019   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:22.325024   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:22.325093   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:22.362900   80762 cri.go:89] found id: ""
	I0612 21:41:22.362926   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.362938   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:22.362946   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:22.363008   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:22.399004   80762 cri.go:89] found id: ""
	I0612 21:41:22.399037   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.399048   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:22.399056   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:22.399116   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:22.434306   80762 cri.go:89] found id: ""
	I0612 21:41:22.434341   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.434355   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:22.434365   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:22.434422   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:22.479085   80762 cri.go:89] found id: ""
	I0612 21:41:22.479116   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.479129   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:22.479142   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:22.479228   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:22.516730   80762 cri.go:89] found id: ""
	I0612 21:41:22.516755   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.516761   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:22.516769   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:22.516780   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:22.570921   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:22.570957   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:22.585409   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:22.585437   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:22.667314   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:22.667342   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:22.667360   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:22.743865   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:22.743901   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:25.282306   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:25.297334   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:25.297407   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:25.336610   80762 cri.go:89] found id: ""
	I0612 21:41:25.336641   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.336654   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:25.336662   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:25.336729   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:25.373307   80762 cri.go:89] found id: ""
	I0612 21:41:25.373338   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.373350   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:25.373358   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:25.373425   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:25.413141   80762 cri.go:89] found id: ""
	I0612 21:41:25.413169   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.413177   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:25.413183   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:25.413233   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:25.450810   80762 cri.go:89] found id: ""
	I0612 21:41:25.450842   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.450853   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:25.450862   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:25.450924   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:25.487017   80762 cri.go:89] found id: ""
	I0612 21:41:25.487049   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.487255   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:25.487269   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:25.487328   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:25.524335   80762 cri.go:89] found id: ""
	I0612 21:41:25.524361   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.524371   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:25.524377   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:25.524428   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:25.560394   80762 cri.go:89] found id: ""
	I0612 21:41:25.560421   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.560429   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:25.560435   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:25.560482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:25.599334   80762 cri.go:89] found id: ""
	I0612 21:41:25.599362   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.599372   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:25.599384   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:25.599399   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:25.680153   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:25.680195   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:25.726336   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:25.726377   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:25.777064   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:25.777098   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:25.791978   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:25.792007   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:25.868860   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:28.369099   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:28.382729   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:28.382786   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:28.423835   80762 cri.go:89] found id: ""
	I0612 21:41:28.423865   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.423875   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:28.423889   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:28.423953   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:28.463098   80762 cri.go:89] found id: ""
	I0612 21:41:28.463127   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.463137   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:28.463144   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:28.463223   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:28.499678   80762 cri.go:89] found id: ""
	I0612 21:41:28.499707   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.499718   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:28.499726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:28.499786   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:28.536057   80762 cri.go:89] found id: ""
	I0612 21:41:28.536089   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.536101   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:28.536108   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:28.536180   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:28.571052   80762 cri.go:89] found id: ""
	I0612 21:41:28.571080   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.571090   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:28.571098   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:28.571162   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:28.607320   80762 cri.go:89] found id: ""
	I0612 21:41:28.607348   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.607360   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:28.607368   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:28.607427   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:28.643000   80762 cri.go:89] found id: ""
	I0612 21:41:28.643029   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.643037   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:28.643042   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:28.643113   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:28.684134   80762 cri.go:89] found id: ""
	I0612 21:41:28.684164   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.684175   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:28.684186   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:28.684201   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:28.737059   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:28.737092   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:28.753290   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:28.753320   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:28.826964   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:28.826990   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:28.827009   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:28.908874   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:28.908919   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:31.450362   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:31.465831   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:31.465912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:31.507441   80762 cri.go:89] found id: ""
	I0612 21:41:31.507465   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.507474   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:31.507482   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:31.507546   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:31.541403   80762 cri.go:89] found id: ""
	I0612 21:41:31.541437   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.541450   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:31.541458   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:31.541524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:31.576367   80762 cri.go:89] found id: ""
	I0612 21:41:31.576393   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.576405   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:31.576412   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:31.576475   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:31.615053   80762 cri.go:89] found id: ""
	I0612 21:41:31.615081   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.615091   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:31.615099   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:31.615159   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:31.650460   80762 cri.go:89] found id: ""
	I0612 21:41:31.650495   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.650504   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:31.650511   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:31.650580   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:31.690764   80762 cri.go:89] found id: ""
	I0612 21:41:31.690792   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.690803   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:31.690810   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:31.690870   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:31.729785   80762 cri.go:89] found id: ""
	I0612 21:41:31.729809   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.729817   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:31.729822   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:31.729881   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:31.772978   80762 cri.go:89] found id: ""
	I0612 21:41:31.773005   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.773013   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:31.773023   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:31.773038   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:31.830451   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:31.830484   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:31.846821   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:31.846850   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:31.927289   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:31.927328   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:31.927358   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:32.004814   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:32.004852   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:34.550931   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:34.567559   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:34.567618   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:34.602234   80762 cri.go:89] found id: ""
	I0612 21:41:34.602260   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.602267   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:34.602273   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:34.602318   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:34.639575   80762 cri.go:89] found id: ""
	I0612 21:41:34.639598   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.639605   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:34.639610   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:34.639656   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:34.681325   80762 cri.go:89] found id: ""
	I0612 21:41:34.681360   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.681368   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:34.681374   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:34.681478   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:34.721405   80762 cri.go:89] found id: ""
	I0612 21:41:34.721432   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.721444   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:34.721451   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:34.721517   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:34.764344   80762 cri.go:89] found id: ""
	I0612 21:41:34.764375   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.764386   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:34.764394   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:34.764459   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:34.802083   80762 cri.go:89] found id: ""
	I0612 21:41:34.802107   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.802115   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:34.802121   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:34.802181   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:34.843418   80762 cri.go:89] found id: ""
	I0612 21:41:34.843441   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.843450   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:34.843455   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:34.843501   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:34.877803   80762 cri.go:89] found id: ""
	I0612 21:41:34.877832   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.877842   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:34.877852   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:34.877867   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:34.930515   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:34.930545   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:34.943705   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:34.943729   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:35.024912   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:35.024931   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:35.024941   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:35.109129   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:35.109165   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:37.651788   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:37.667901   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:37.667964   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:37.709599   80762 cri.go:89] found id: ""
	I0612 21:41:37.709627   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.709637   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:37.709645   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:37.709700   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:37.747150   80762 cri.go:89] found id: ""
	I0612 21:41:37.747191   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.747204   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:37.747212   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:37.747273   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:37.785528   80762 cri.go:89] found id: ""
	I0612 21:41:37.785552   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.785560   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:37.785567   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:37.785614   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:37.822363   80762 cri.go:89] found id: ""
	I0612 21:41:37.822390   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.822400   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:37.822408   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:37.822468   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:37.858285   80762 cri.go:89] found id: ""
	I0612 21:41:37.858395   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.858409   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:37.858416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:37.858466   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:37.897500   80762 cri.go:89] found id: ""
	I0612 21:41:37.897542   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.897556   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:37.897574   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:37.897635   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:37.937878   80762 cri.go:89] found id: ""
	I0612 21:41:37.937905   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.937921   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:37.937927   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:37.937985   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:37.978282   80762 cri.go:89] found id: ""
	I0612 21:41:37.978310   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.978319   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:37.978327   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:37.978341   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:38.055864   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:38.055890   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:38.055903   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:38.135883   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:38.135918   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:38.178641   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:38.178668   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:38.236635   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:38.236686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:40.759426   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:40.773526   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:40.773598   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:40.819130   80762 cri.go:89] found id: ""
	I0612 21:41:40.819161   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.819190   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:40.819202   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:40.819264   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:40.856176   80762 cri.go:89] found id: ""
	I0612 21:41:40.856204   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.856216   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:40.856224   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:40.856287   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:40.896709   80762 cri.go:89] found id: ""
	I0612 21:41:40.896739   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.896750   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:40.896759   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:40.896820   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:40.936431   80762 cri.go:89] found id: ""
	I0612 21:41:40.936457   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.936465   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:40.936471   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:40.936528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:40.979773   80762 cri.go:89] found id: ""
	I0612 21:41:40.979809   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.979820   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:40.979828   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:40.979892   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:41.023885   80762 cri.go:89] found id: ""
	I0612 21:41:41.023910   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.023919   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:41.023925   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:41.024004   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:41.070370   80762 cri.go:89] found id: ""
	I0612 21:41:41.070396   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.070405   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:41.070411   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:41.070467   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:41.115282   80762 cri.go:89] found id: ""
	I0612 21:41:41.115311   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.115321   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:41.115332   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:41.115346   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:41.128680   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:41.128710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:41.206100   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:41.206125   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:41.206140   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:41.283499   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:41.283536   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:41.323275   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:41.323307   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:43.875750   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:43.890156   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:43.890216   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:43.935105   80762 cri.go:89] found id: ""
	I0612 21:41:43.935135   80762 logs.go:276] 0 containers: []
	W0612 21:41:43.935147   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:43.935155   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:43.935236   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:43.980929   80762 cri.go:89] found id: ""
	I0612 21:41:43.980958   80762 logs.go:276] 0 containers: []
	W0612 21:41:43.980967   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:43.980973   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:43.981051   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:44.029387   80762 cri.go:89] found id: ""
	I0612 21:41:44.029409   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.029416   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:44.029421   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:44.029483   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:44.067415   80762 cri.go:89] found id: ""
	I0612 21:41:44.067449   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.067460   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:44.067468   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:44.067528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:44.105093   80762 cri.go:89] found id: ""
	I0612 21:41:44.105117   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.105125   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:44.105131   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:44.105178   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:44.142647   80762 cri.go:89] found id: ""
	I0612 21:41:44.142680   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.142691   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:44.142699   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:44.142759   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:44.182725   80762 cri.go:89] found id: ""
	I0612 21:41:44.182756   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.182767   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:44.182775   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:44.182836   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:44.219538   80762 cri.go:89] found id: ""
	I0612 21:41:44.219568   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.219580   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:44.219593   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:44.219608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:44.272234   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:44.272267   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:44.285631   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:44.285663   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:44.362453   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:44.362470   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:44.362482   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:44.444624   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:44.444656   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:46.985731   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:46.999749   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:46.999819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:47.035051   80762 cri.go:89] found id: ""
	I0612 21:41:47.035073   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.035080   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:47.035086   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:47.035136   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:47.077929   80762 cri.go:89] found id: ""
	I0612 21:41:47.077964   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.077975   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:47.077982   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:47.078062   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:47.111621   80762 cri.go:89] found id: ""
	I0612 21:41:47.111660   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.111671   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:47.111679   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:47.111744   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:47.150700   80762 cri.go:89] found id: ""
	I0612 21:41:47.150725   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.150733   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:47.150739   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:47.150787   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:47.189547   80762 cri.go:89] found id: ""
	I0612 21:41:47.189576   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.189586   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:47.189593   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:47.189660   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:47.229482   80762 cri.go:89] found id: ""
	I0612 21:41:47.229510   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.229522   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:47.229530   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:47.229599   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:47.266798   80762 cri.go:89] found id: ""
	I0612 21:41:47.266826   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.266837   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:47.266844   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:47.266906   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:47.302256   80762 cri.go:89] found id: ""
	I0612 21:41:47.302280   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.302287   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:47.302295   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:47.302306   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:47.354485   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:47.354526   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:47.368689   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:47.368713   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:47.438219   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:47.438244   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:47.438257   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:47.514199   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:47.514227   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
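The cycle above is minikube's control-plane probe: it runs pgrep for a kube-apiserver process, then `sudo crictl ps -a --quiet --name=<component>` for each expected component, and when every query comes back empty it falls back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal sketch of the same probe run by hand on the node (an illustration only; it assumes SSH access to the minikube VM and that crictl talks to the default CRI-O socket) looks like:

    # probe for a running apiserver process (same pattern as in the log above)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # list any containers, running or exited, for each control-plane component
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      sudo crictl ps -a --quiet --name="$c"
    done

Empty output for every component, as in the log, means no control-plane containers have been created yet, so the retry loop continues.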
	I0612 21:41:50.056394   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:50.069348   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:50.069482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:50.106057   80762 cri.go:89] found id: ""
	I0612 21:41:50.106087   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.106097   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:50.106104   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:50.106162   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:50.144532   80762 cri.go:89] found id: ""
	I0612 21:41:50.144560   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.144571   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:50.144579   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:50.144662   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:50.184549   80762 cri.go:89] found id: ""
	I0612 21:41:50.184575   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.184583   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:50.184588   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:50.184648   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:50.228520   80762 cri.go:89] found id: ""
	I0612 21:41:50.228555   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.228569   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:50.228578   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:50.228649   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:50.265697   80762 cri.go:89] found id: ""
	I0612 21:41:50.265726   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.265737   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:50.265744   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:50.265818   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:50.301353   80762 cri.go:89] found id: ""
	I0612 21:41:50.301393   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.301410   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:50.301416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:50.301477   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:50.337273   80762 cri.go:89] found id: ""
	I0612 21:41:50.337298   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.337306   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:50.337313   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:50.337374   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:50.383090   80762 cri.go:89] found id: ""
	I0612 21:41:50.383116   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.383126   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:50.383138   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:50.383151   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:50.454193   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:50.454240   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:50.477753   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:50.477779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:50.544052   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:50.544075   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:50.544091   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:50.626441   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:50.626480   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:53.168599   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:53.181682   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:53.181764   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:53.228060   80762 cri.go:89] found id: ""
	I0612 21:41:53.228096   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.228107   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:53.228115   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:53.228176   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:53.264867   80762 cri.go:89] found id: ""
	I0612 21:41:53.264890   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.264898   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:53.264903   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:53.264950   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:53.298351   80762 cri.go:89] found id: ""
	I0612 21:41:53.298378   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.298386   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:53.298392   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:53.298448   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:53.335888   80762 cri.go:89] found id: ""
	I0612 21:41:53.335910   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.335917   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:53.335922   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:53.335980   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:53.376131   80762 cri.go:89] found id: ""
	I0612 21:41:53.376166   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.376175   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:53.376183   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:53.376240   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:53.412059   80762 cri.go:89] found id: ""
	I0612 21:41:53.412082   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.412088   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:53.412097   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:53.412142   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:53.446776   80762 cri.go:89] found id: ""
	I0612 21:41:53.446805   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.446816   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:53.446823   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:53.446894   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:53.482411   80762 cri.go:89] found id: ""
	I0612 21:41:53.482433   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.482441   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:53.482449   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:53.482460   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:53.522419   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:53.522448   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:53.573107   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:53.573141   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:53.587101   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:53.587147   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:53.665631   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:53.665660   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:53.665675   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:56.242482   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:56.255606   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:56.255682   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:56.290837   80762 cri.go:89] found id: ""
	I0612 21:41:56.290861   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.290872   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:56.290880   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:56.290938   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:56.325429   80762 cri.go:89] found id: ""
	I0612 21:41:56.325458   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.325466   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:56.325471   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:56.325534   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:56.359809   80762 cri.go:89] found id: ""
	I0612 21:41:56.359835   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.359845   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:56.359852   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:56.359912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:56.397775   80762 cri.go:89] found id: ""
	I0612 21:41:56.397803   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.397815   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:56.397823   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:56.397884   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:56.433917   80762 cri.go:89] found id: ""
	I0612 21:41:56.433945   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.433956   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:56.433963   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:56.434028   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:56.467390   80762 cri.go:89] found id: ""
	I0612 21:41:56.467419   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.467429   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:56.467438   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:56.467496   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:56.504014   80762 cri.go:89] found id: ""
	I0612 21:41:56.504048   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.504059   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:56.504067   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:56.504138   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:56.544157   80762 cri.go:89] found id: ""
	I0612 21:41:56.544187   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.544198   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:56.544209   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:56.544224   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:56.595431   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:56.595462   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:56.608897   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:56.608936   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:56.682706   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:56.682735   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:56.682749   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:56.762598   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:56.762634   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
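Every "describe nodes" attempt in this stretch fails the same way: the bundled kubectl reads /var/lib/minikube/kubeconfig, whose server is the node-local apiserver on localhost:8443, and with no kube-apiserver container running the connection is refused. A quick way to confirm the same condition from inside the node (a sketch under the same assumptions as above, using the kubectl binary path shown in the log) is:

    # hits the apiserver health endpoint directly; fails with "connection refused"
    # for as long as the control plane has not come up
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz

Once the apiserver starts, the same command returns "ok" and the describe-nodes gathering in the log succeeds.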
	I0612 21:41:59.302898   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:59.317901   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:59.317976   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:59.360136   80762 cri.go:89] found id: ""
	I0612 21:41:59.360164   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.360174   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:59.360181   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:59.360249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:59.397205   80762 cri.go:89] found id: ""
	I0612 21:41:59.397233   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.397244   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:59.397252   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:59.397312   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:59.437063   80762 cri.go:89] found id: ""
	I0612 21:41:59.437093   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.437105   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:59.437113   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:59.437172   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:59.472800   80762 cri.go:89] found id: ""
	I0612 21:41:59.472827   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.472835   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:59.472843   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:59.472904   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:59.509452   80762 cri.go:89] found id: ""
	I0612 21:41:59.509474   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.509482   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:59.509487   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:59.509534   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:59.546121   80762 cri.go:89] found id: ""
	I0612 21:41:59.546151   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.546162   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:59.546170   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:59.546231   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:59.582983   80762 cri.go:89] found id: ""
	I0612 21:41:59.583007   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.583014   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:59.583020   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:59.583065   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:59.621110   80762 cri.go:89] found id: ""
	I0612 21:41:59.621148   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.621160   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:59.621171   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:59.621187   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:59.673113   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:59.673143   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:59.688106   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:59.688171   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:59.767653   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:59.767678   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:59.767692   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:59.848467   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:59.848507   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:02.391324   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:02.406543   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:02.406621   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:02.442225   80762 cri.go:89] found id: ""
	I0612 21:42:02.442255   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.442265   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:02.442273   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:02.442341   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:02.479445   80762 cri.go:89] found id: ""
	I0612 21:42:02.479476   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.479487   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:02.479495   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:02.479557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:02.517654   80762 cri.go:89] found id: ""
	I0612 21:42:02.517685   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.517697   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:02.517705   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:02.517775   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:02.562743   80762 cri.go:89] found id: ""
	I0612 21:42:02.562777   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.562788   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:02.562807   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:02.562873   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:02.597775   80762 cri.go:89] found id: ""
	I0612 21:42:02.597805   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.597816   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:02.597824   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:02.597886   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:02.633869   80762 cri.go:89] found id: ""
	I0612 21:42:02.633901   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.633913   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:02.633921   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:02.633979   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:02.671931   80762 cri.go:89] found id: ""
	I0612 21:42:02.671962   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.671974   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:02.671982   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:02.672044   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:02.709162   80762 cri.go:89] found id: ""
	I0612 21:42:02.709192   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.709204   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:02.709214   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:02.709228   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:02.722937   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:02.722967   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:02.798249   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:02.798275   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:02.798292   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:02.876339   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:02.876376   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:02.913080   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:02.913106   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:05.464433   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:05.478249   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:05.478326   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:05.520742   80762 cri.go:89] found id: ""
	I0612 21:42:05.520765   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.520772   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:05.520778   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:05.520840   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:05.564864   80762 cri.go:89] found id: ""
	I0612 21:42:05.564896   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.564904   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:05.564911   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:05.564956   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:05.602917   80762 cri.go:89] found id: ""
	I0612 21:42:05.602942   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.602950   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:05.602956   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:05.603040   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:05.645073   80762 cri.go:89] found id: ""
	I0612 21:42:05.645104   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.645112   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:05.645119   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:05.645166   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:05.684133   80762 cri.go:89] found id: ""
	I0612 21:42:05.684165   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.684176   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:05.684184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:05.684249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:05.721461   80762 cri.go:89] found id: ""
	I0612 21:42:05.721489   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.721499   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:05.721506   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:05.721573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:05.756710   80762 cri.go:89] found id: ""
	I0612 21:42:05.756744   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.756755   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:05.756763   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:05.756814   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:05.792182   80762 cri.go:89] found id: ""
	I0612 21:42:05.792210   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.792220   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:05.792230   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:05.792245   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:05.836597   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:05.836632   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:05.888704   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:05.888742   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:05.903354   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:05.903387   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:05.976146   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:05.976169   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:05.976183   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:08.559612   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:08.573592   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:08.573648   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:08.613347   80762 cri.go:89] found id: ""
	I0612 21:42:08.613373   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.613381   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:08.613387   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:08.613449   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:08.650606   80762 cri.go:89] found id: ""
	I0612 21:42:08.650634   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.650643   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:08.650648   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:08.650692   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:08.687056   80762 cri.go:89] found id: ""
	I0612 21:42:08.687087   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.687097   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:08.687105   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:08.687191   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:08.723112   80762 cri.go:89] found id: ""
	I0612 21:42:08.723138   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.723146   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:08.723161   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:08.723238   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:08.764772   80762 cri.go:89] found id: ""
	I0612 21:42:08.764801   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.764812   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:08.764820   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:08.764888   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:08.801914   80762 cri.go:89] found id: ""
	I0612 21:42:08.801944   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.801954   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:08.801962   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:08.802025   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:08.837991   80762 cri.go:89] found id: ""
	I0612 21:42:08.838017   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.838025   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:08.838030   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:08.838084   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:08.874977   80762 cri.go:89] found id: ""
	I0612 21:42:08.875016   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.875027   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:08.875039   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:08.875058   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:08.931628   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:08.931659   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:08.946763   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:08.946791   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:09.028039   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:09.028061   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:09.028079   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:09.104350   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:09.104406   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:11.645114   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:11.659382   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:11.659455   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:11.702205   80762 cri.go:89] found id: ""
	I0612 21:42:11.702236   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.702246   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:11.702254   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:11.702309   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:11.748328   80762 cri.go:89] found id: ""
	I0612 21:42:11.748350   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.748357   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:11.748362   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:11.748408   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:11.788980   80762 cri.go:89] found id: ""
	I0612 21:42:11.789009   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.789020   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:11.789027   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:11.789083   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:11.829886   80762 cri.go:89] found id: ""
	I0612 21:42:11.829910   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.829920   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:11.829928   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:11.830006   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:11.870088   80762 cri.go:89] found id: ""
	I0612 21:42:11.870120   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.870131   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:11.870138   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:11.870201   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:11.907862   80762 cri.go:89] found id: ""
	I0612 21:42:11.907893   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.907905   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:11.907913   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:11.907973   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:11.947773   80762 cri.go:89] found id: ""
	I0612 21:42:11.947798   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.947808   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:11.947816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:11.947876   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:11.987806   80762 cri.go:89] found id: ""
	I0612 21:42:11.987837   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.987848   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:11.987859   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:11.987878   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:12.043451   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:12.043481   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:12.057946   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:12.057980   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:12.134265   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:12.134298   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:12.134310   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:12.221276   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:12.221315   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:14.760949   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:14.775242   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:14.775356   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:14.818818   80762 cri.go:89] found id: ""
	I0612 21:42:14.818847   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.818856   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:14.818863   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:14.818931   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:14.859106   80762 cri.go:89] found id: ""
	I0612 21:42:14.859146   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.859157   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:14.859164   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:14.859247   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:14.894993   80762 cri.go:89] found id: ""
	I0612 21:42:14.895016   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.895026   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:14.895037   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:14.895087   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:14.943534   80762 cri.go:89] found id: ""
	I0612 21:42:14.943561   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.943572   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:14.943579   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:14.943645   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:14.985243   80762 cri.go:89] found id: ""
	I0612 21:42:14.985267   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.985274   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:14.985280   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:14.985328   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:15.029253   80762 cri.go:89] found id: ""
	I0612 21:42:15.029286   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.029297   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:15.029305   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:15.029371   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:15.063471   80762 cri.go:89] found id: ""
	I0612 21:42:15.063499   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.063510   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:15.063517   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:15.063580   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:15.101152   80762 cri.go:89] found id: ""
	I0612 21:42:15.101181   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.101201   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:15.101212   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:15.101227   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:15.178398   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:15.178416   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:15.178429   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:15.255420   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:15.255468   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:15.295492   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:15.295519   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:15.345010   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:15.345051   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:17.862640   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:17.879256   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:17.879333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:17.918910   80762 cri.go:89] found id: ""
	I0612 21:42:17.918940   80762 logs.go:276] 0 containers: []
	W0612 21:42:17.918951   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:17.918958   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:17.919018   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:17.959701   80762 cri.go:89] found id: ""
	I0612 21:42:17.959726   80762 logs.go:276] 0 containers: []
	W0612 21:42:17.959734   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:17.959739   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:17.959820   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:18.005096   80762 cri.go:89] found id: ""
	I0612 21:42:18.005125   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.005142   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:18.005150   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:18.005211   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:18.046877   80762 cri.go:89] found id: ""
	I0612 21:42:18.046907   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.046919   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:18.046927   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:18.046992   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:18.087907   80762 cri.go:89] found id: ""
	I0612 21:42:18.087934   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.087946   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:18.087953   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:18.088016   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:18.139423   80762 cri.go:89] found id: ""
	I0612 21:42:18.139453   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.139464   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:18.139473   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:18.139535   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:18.180433   80762 cri.go:89] found id: ""
	I0612 21:42:18.180459   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.180469   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:18.180476   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:18.180537   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:18.220966   80762 cri.go:89] found id: ""
	I0612 21:42:18.220996   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.221005   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:18.221015   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:18.221033   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:18.276006   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:18.276031   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:18.290975   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:18.291026   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:18.369318   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:18.369345   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:18.369359   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:18.451336   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:18.451380   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:21.016353   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:21.030544   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:21.030611   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:21.072558   80762 cri.go:89] found id: ""
	I0612 21:42:21.072583   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.072591   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:21.072596   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:21.072649   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:21.106320   80762 cri.go:89] found id: ""
	I0612 21:42:21.106352   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.106364   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:21.106372   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:21.106431   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:21.139155   80762 cri.go:89] found id: ""
	I0612 21:42:21.139201   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.139212   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:21.139220   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:21.139285   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:21.178731   80762 cri.go:89] found id: ""
	I0612 21:42:21.178762   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.178772   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:21.178779   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:21.178838   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:21.213606   80762 cri.go:89] found id: ""
	I0612 21:42:21.213635   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.213645   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:21.213652   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:21.213707   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:21.250966   80762 cri.go:89] found id: ""
	I0612 21:42:21.250993   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.251009   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:21.251017   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:21.251084   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:21.289434   80762 cri.go:89] found id: ""
	I0612 21:42:21.289457   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.289465   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:21.289474   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:21.289520   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:21.329028   80762 cri.go:89] found id: ""
	I0612 21:42:21.329058   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.329069   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:21.329080   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:21.329098   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:21.342621   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:21.342648   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:21.418742   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:21.418766   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:21.418779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:21.493909   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:21.493944   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:21.534693   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:21.534723   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:24.086466   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:24.101820   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:24.101877   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:24.145732   80762 cri.go:89] found id: ""
	I0612 21:42:24.145757   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.145767   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:24.145774   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:24.145832   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:24.182765   80762 cri.go:89] found id: ""
	I0612 21:42:24.182788   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.182795   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:24.182801   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:24.182889   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:24.235093   80762 cri.go:89] found id: ""
	I0612 21:42:24.235121   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.235129   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:24.235134   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:24.235208   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:24.269788   80762 cri.go:89] found id: ""
	I0612 21:42:24.269809   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.269816   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:24.269822   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:24.269867   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:24.306594   80762 cri.go:89] found id: ""
	I0612 21:42:24.306620   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.306628   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:24.306634   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:24.306693   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:24.343766   80762 cri.go:89] found id: ""
	I0612 21:42:24.343786   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.343795   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:24.343802   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:24.343858   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:24.384417   80762 cri.go:89] found id: ""
	I0612 21:42:24.384447   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.384457   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:24.384464   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:24.384524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:24.424935   80762 cri.go:89] found id: ""
	I0612 21:42:24.424958   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.424965   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:24.424974   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:24.424988   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:24.499737   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:24.499771   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:24.537631   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:24.537667   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:24.593743   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:24.593779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:24.608078   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:24.608107   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:24.679729   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:27.180828   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:27.195484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:27.195552   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:27.235725   80762 cri.go:89] found id: ""
	I0612 21:42:27.235750   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.235761   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:27.235768   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:27.235816   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:27.279763   80762 cri.go:89] found id: ""
	I0612 21:42:27.279795   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.279806   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:27.279814   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:27.279875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:27.320510   80762 cri.go:89] found id: ""
	I0612 21:42:27.320534   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.320543   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:27.320554   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:27.320641   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:27.359195   80762 cri.go:89] found id: ""
	I0612 21:42:27.359227   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.359239   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:27.359247   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:27.359312   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:27.394977   80762 cri.go:89] found id: ""
	I0612 21:42:27.395004   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.395015   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:27.395033   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:27.395099   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:27.431905   80762 cri.go:89] found id: ""
	I0612 21:42:27.431925   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.431933   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:27.431945   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:27.431990   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:27.469929   80762 cri.go:89] found id: ""
	I0612 21:42:27.469954   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.469961   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:27.469967   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:27.470024   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:27.505128   80762 cri.go:89] found id: ""
	I0612 21:42:27.505153   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.505160   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:27.505169   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:27.505180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:27.556739   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:27.556771   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:27.572730   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:27.572757   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:27.646797   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:27.646819   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:27.646836   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:27.726554   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:27.726599   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:30.268770   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:30.282575   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:30.282635   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:30.321243   80762 cri.go:89] found id: ""
	I0612 21:42:30.321276   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.321288   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:30.321295   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:30.321342   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:30.359403   80762 cri.go:89] found id: ""
	I0612 21:42:30.359432   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.359443   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:30.359451   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:30.359505   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:30.395967   80762 cri.go:89] found id: ""
	I0612 21:42:30.396006   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.396015   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:30.396028   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:30.396087   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:30.438093   80762 cri.go:89] found id: ""
	I0612 21:42:30.438123   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.438132   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:30.438138   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:30.438192   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:30.476859   80762 cri.go:89] found id: ""
	I0612 21:42:30.476888   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.476898   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:30.476905   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:30.476968   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:30.512998   80762 cri.go:89] found id: ""
	I0612 21:42:30.513026   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.513037   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:30.513045   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:30.513106   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:30.548822   80762 cri.go:89] found id: ""
	I0612 21:42:30.548847   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.548855   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:30.548861   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:30.548908   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:30.584385   80762 cri.go:89] found id: ""
	I0612 21:42:30.584417   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.584426   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:30.584439   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:30.584454   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:30.685995   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:30.686019   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:30.686030   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:30.778789   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:30.778827   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:30.819467   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:30.819511   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:30.872563   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:30.872599   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:33.387831   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:33.401663   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:33.401740   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:33.439690   80762 cri.go:89] found id: ""
	I0612 21:42:33.439723   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.439735   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:33.439743   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:33.439805   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:33.480330   80762 cri.go:89] found id: ""
	I0612 21:42:33.480357   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.480365   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:33.480371   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:33.480422   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:33.520367   80762 cri.go:89] found id: ""
	I0612 21:42:33.520396   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.520407   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:33.520415   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:33.520476   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:33.556859   80762 cri.go:89] found id: ""
	I0612 21:42:33.556892   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.556904   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:33.556911   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:33.556963   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:33.595982   80762 cri.go:89] found id: ""
	I0612 21:42:33.596014   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.596024   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:33.596030   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:33.596091   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:33.630942   80762 cri.go:89] found id: ""
	I0612 21:42:33.630974   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.630986   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:33.630994   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:33.631055   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:33.671649   80762 cri.go:89] found id: ""
	I0612 21:42:33.671676   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.671684   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:33.671690   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:33.671734   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:33.716664   80762 cri.go:89] found id: ""
	I0612 21:42:33.716690   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.716700   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:33.716711   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:33.716726   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:33.734168   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:33.734198   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:33.826469   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:33.826491   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:33.826507   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:33.915109   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:33.915142   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:33.957969   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:33.958007   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:36.515258   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:36.529603   80762 kubeadm.go:591] duration metric: took 4m4.234271001s to restartPrimaryControlPlane
	W0612 21:42:36.529688   80762 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:42:36.529719   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:42:41.545629   80762 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.01588354s)
	I0612 21:42:41.545734   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:42:41.561025   80762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:42:41.572788   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:42:41.583027   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:42:41.583052   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:42:41.583095   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:42:41.593433   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:42:41.593502   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:42:41.603944   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:42:41.613382   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:42:41.613432   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:42:41.622874   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:42:41.632270   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:42:41.632370   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:42:41.642072   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:42:41.652120   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:42:41.652194   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:42:41.662684   80762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:42:41.894903   80762 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:44:37.700712   80762 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:44:37.700862   80762 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:44:37.702455   80762 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:44:37.702552   80762 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:44:37.702639   80762 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:44:37.702749   80762 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:44:37.702887   80762 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:44:37.702992   80762 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:44:37.704955   80762 out.go:204]   - Generating certificates and keys ...
	I0612 21:44:37.705032   80762 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:44:37.705088   80762 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:44:37.705159   80762 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:44:37.705228   80762 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:44:37.705289   80762 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:44:37.705368   80762 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:44:37.705467   80762 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:44:37.705538   80762 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:44:37.705620   80762 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:44:37.705683   80762 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:44:37.705723   80762 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:44:37.705773   80762 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:44:37.705816   80762 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:44:37.705861   80762 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:44:37.705917   80762 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:44:37.705964   80762 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:44:37.706062   80762 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:44:37.706172   80762 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:44:37.706231   80762 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:44:37.706288   80762 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:44:37.707753   80762 out.go:204]   - Booting up control plane ...
	I0612 21:44:37.707857   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:44:37.707931   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:44:37.707994   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:44:37.708064   80762 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:44:37.708197   80762 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:44:37.708251   80762 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:44:37.708344   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.708536   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.708600   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.708770   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.708864   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709067   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709133   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709340   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709441   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709638   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709650   80762 kubeadm.go:309] 
	I0612 21:44:37.709683   80762 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:44:37.709721   80762 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:44:37.709728   80762 kubeadm.go:309] 
	I0612 21:44:37.709777   80762 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:44:37.709817   80762 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:44:37.709910   80762 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:44:37.709917   80762 kubeadm.go:309] 
	I0612 21:44:37.710018   80762 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:44:37.710052   80762 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:44:37.710083   80762 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:44:37.710089   80762 kubeadm.go:309] 
	I0612 21:44:37.710184   80762 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:44:37.710259   80762 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:44:37.710265   80762 kubeadm.go:309] 
	I0612 21:44:37.710359   80762 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:44:37.710431   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:44:37.710497   80762 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:44:37.710563   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:44:37.710607   80762 kubeadm.go:309] 
	W0612 21:44:37.710666   80762 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0612 21:44:37.710709   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:44:38.170461   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:44:38.186842   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:44:38.198380   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:44:38.198400   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:44:38.198454   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:44:38.208876   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:44:38.208948   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:44:38.219641   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:44:38.229622   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:44:38.229685   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:44:38.240153   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:44:38.251342   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:44:38.251401   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:44:38.262662   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:44:38.272898   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:44:38.272954   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:44:38.283213   80762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:44:38.501637   80762 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:46:34.582636   80762 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:46:34.582745   80762 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:46:34.584702   80762 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:46:34.584775   80762 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:46:34.584898   80762 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:46:34.585029   80762 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:46:34.585172   80762 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:46:34.585263   80762 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:46:34.587030   80762 out.go:204]   - Generating certificates and keys ...
	I0612 21:46:34.587101   80762 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:46:34.587160   80762 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:46:34.587260   80762 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:46:34.587349   80762 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:46:34.587446   80762 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:46:34.587521   80762 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:46:34.587609   80762 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:46:34.587697   80762 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:46:34.587803   80762 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:46:34.587886   80762 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:46:34.588014   80762 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:46:34.588097   80762 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:46:34.588177   80762 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:46:34.588268   80762 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:46:34.588381   80762 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:46:34.588447   80762 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:46:34.588558   80762 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:46:34.588659   80762 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:46:34.588719   80762 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:46:34.588816   80762 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:46:34.590114   80762 out.go:204]   - Booting up control plane ...
	I0612 21:46:34.590226   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:46:34.590326   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:46:34.590444   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:46:34.590527   80762 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:46:34.590710   80762 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:46:34.590778   80762 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:46:34.590847   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591054   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591149   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591411   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591508   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591743   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591846   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.592108   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.592205   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.592395   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.592403   80762 kubeadm.go:309] 
	I0612 21:46:34.592436   80762 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:46:34.592485   80762 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:46:34.592500   80762 kubeadm.go:309] 
	I0612 21:46:34.592535   80762 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:46:34.592563   80762 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:46:34.592677   80762 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:46:34.592688   80762 kubeadm.go:309] 
	I0612 21:46:34.592820   80762 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:46:34.592855   80762 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:46:34.592883   80762 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:46:34.592890   80762 kubeadm.go:309] 
	I0612 21:46:34.593007   80762 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:46:34.593107   80762 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:46:34.593116   80762 kubeadm.go:309] 
	I0612 21:46:34.593224   80762 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:46:34.593342   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:46:34.593426   80762 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:46:34.593494   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:46:34.593552   80762 kubeadm.go:393] duration metric: took 8m2.356271864s to StartCluster
	I0612 21:46:34.593558   80762 kubeadm.go:309] 
	I0612 21:46:34.593589   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:46:34.593639   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:46:34.643842   80762 cri.go:89] found id: ""
	I0612 21:46:34.643876   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.643887   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:46:34.643905   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:46:34.643982   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:46:34.682878   80762 cri.go:89] found id: ""
	I0612 21:46:34.682899   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.682906   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:46:34.682912   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:46:34.682961   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:46:34.721931   80762 cri.go:89] found id: ""
	I0612 21:46:34.721955   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.721964   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:46:34.721969   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:46:34.722021   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:46:34.759233   80762 cri.go:89] found id: ""
	I0612 21:46:34.759266   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.759274   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:46:34.759280   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:46:34.759333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:46:34.800142   80762 cri.go:89] found id: ""
	I0612 21:46:34.800176   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.800186   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:46:34.800194   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:46:34.800256   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:46:34.836746   80762 cri.go:89] found id: ""
	I0612 21:46:34.836774   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.836784   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:46:34.836791   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:46:34.836850   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:46:34.876108   80762 cri.go:89] found id: ""
	I0612 21:46:34.876138   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.876147   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:46:34.876153   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:46:34.876202   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:46:34.912272   80762 cri.go:89] found id: ""
	I0612 21:46:34.912294   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.912301   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:46:34.912310   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:46:34.912324   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:46:34.997300   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:46:34.997331   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:46:34.997347   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:46:35.105602   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:46:35.105638   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:46:35.152818   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:46:35.152857   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:46:35.216504   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:46:35.216545   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0612 21:46:35.239531   80762 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0612 21:46:35.239581   80762 out.go:239] * 
	W0612 21:46:35.239646   80762 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:46:35.239672   80762 out.go:239] * 
	W0612 21:46:35.240600   80762 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0612 21:46:35.244822   80762 out.go:177] 
	W0612 21:46:35.246072   80762 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:46:35.246137   80762 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0612 21:46:35.246164   80762 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0612 21:46:35.247768   80762 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-983302 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
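The start exited with status 109 after the K8S_KUBELET_NOT_RUNNING error above; the log's own advice is to inspect the kubelet on the node and to retry the start with the systemd cgroup driver. A minimal troubleshooting sketch following that advice, assuming the same profile name and flags as this run:

	# Inspect the kubelet inside the failing node (profile name taken from this run).
	out/minikube-linux-amd64 -p old-k8s-version-983302 ssh "sudo systemctl status kubelet; sudo journalctl -xeu kubelet | tail -n 50"
	# Look for control-plane containers that crashed under CRI-O, as the kubeadm output suggests.
	out/minikube-linux-amd64 -p old-k8s-version-983302 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup-driver override suggested in the log.
	out/minikube-linux-amd64 start -p old-k8s-version-983302 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd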
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302: exit status 2 (232.455559ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-983302 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-983302 logs -n 25: (1.58883522s)
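The dump below is only the last 25 log lines collected by the test framework; for the full log that the box above suggests attaching to a minikube issue, a complete capture can be written to a file instead, e.g. (sketch using the same profile):

	out/minikube-linux-amd64 -p old-k8s-version-983302 logs --file=logs.txt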
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| delete  | -p bridge-701638                                       | bridge-701638                | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-576552 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | disable-driver-mounts-576552                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-087875             | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-376087  | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-591460            | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-983302        | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-087875                  | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-376087       | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:42 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-591460                 | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-983302             | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 21:33:52
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 21:33:52.855557   80762 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:33:52.855829   80762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:33:52.855839   80762 out.go:304] Setting ErrFile to fd 2...
	I0612 21:33:52.855845   80762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:33:52.856037   80762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:33:52.856582   80762 out.go:298] Setting JSON to false
	I0612 21:33:52.857472   80762 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8178,"bootTime":1718219855,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:33:52.857527   80762 start.go:139] virtualization: kvm guest
	I0612 21:33:52.859369   80762 out.go:177] * [old-k8s-version-983302] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:33:52.860886   80762 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:33:52.860907   80762 notify.go:220] Checking for updates...
	I0612 21:33:52.862185   80762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:33:52.863642   80762 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:33:52.865031   80762 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:33:52.866306   80762 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:33:52.867535   80762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:33:52.869148   80762 config.go:182] Loaded profile config "old-k8s-version-983302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:33:52.869530   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:33:52.869597   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:33:52.884278   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41163
	I0612 21:33:52.884743   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:33:52.885211   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:33:52.885234   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:33:52.885575   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:33:52.885768   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:33:52.887577   80762 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0612 21:33:52.888972   80762 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:33:52.889265   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:33:52.889296   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:33:52.903649   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44493
	I0612 21:33:52.904087   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:33:52.904500   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:33:52.904518   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:33:52.904831   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:33:52.904988   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:33:52.939030   80762 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 21:33:52.940484   80762 start.go:297] selected driver: kvm2
	I0612 21:33:52.940497   80762 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:33:52.940622   80762 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:33:52.941314   80762 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:33:52.941389   80762 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:33:52.956273   80762 install.go:137] /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:33:52.956646   80762 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:33:52.956674   80762 cni.go:84] Creating CNI manager for ""
	I0612 21:33:52.956682   80762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:33:52.956715   80762 start.go:340] cluster config:
	{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:33:52.956828   80762 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:33:52.958634   80762 out.go:177] * Starting "old-k8s-version-983302" primary control-plane node in "old-k8s-version-983302" cluster
	I0612 21:33:52.959924   80762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:33:52.959963   80762 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0612 21:33:52.959970   80762 cache.go:56] Caching tarball of preloaded images
	I0612 21:33:52.960065   80762 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:33:52.960079   80762 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0612 21:33:52.960190   80762 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:33:52.960397   80762 start.go:360] acquireMachinesLock for old-k8s-version-983302: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:33:57.423439   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:00.495475   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:06.575478   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:09.647560   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:15.727510   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:18.799491   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:24.879423   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:27.951495   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:34.031457   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:37.103569   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:43.183470   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:46.255491   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:52.335452   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:55.407544   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:01.487489   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:04.559546   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:10.639492   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:13.711372   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:19.791460   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:22.863455   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:28.943506   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:32.015443   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:38.095436   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:41.167526   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:47.247485   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:50.319435   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:56.399471   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:59.471485   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:05.551493   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:08.623467   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:14.703401   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:17.775479   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:23.855516   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:26.927418   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:33.007439   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:36.079449   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:42.159480   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:45.231482   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:51.311424   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:54.383524   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:00.463466   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:03.535465   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:09.615457   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:12.687462   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:18.767463   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:21.839431   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:24.843967   80243 start.go:364] duration metric: took 4m34.377488728s to acquireMachinesLock for "default-k8s-diff-port-376087"
	I0612 21:37:24.844034   80243 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:37:24.844046   80243 fix.go:54] fixHost starting: 
	I0612 21:37:24.844649   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:37:24.844689   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:37:24.859743   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I0612 21:37:24.860227   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:37:24.860659   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:37:24.860680   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:37:24.861055   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:37:24.861352   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:24.861550   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:37:24.863507   80243 fix.go:112] recreateIfNeeded on default-k8s-diff-port-376087: state=Stopped err=<nil>
	I0612 21:37:24.863538   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	W0612 21:37:24.863708   80243 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:37:24.865564   80243 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-376087" ...
	I0612 21:37:24.866899   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Start
	I0612 21:37:24.867064   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring networks are active...
	I0612 21:37:24.867951   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring network default is active
	I0612 21:37:24.868390   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring network mk-default-k8s-diff-port-376087 is active
	I0612 21:37:24.868746   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Getting domain xml...
	I0612 21:37:24.869408   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Creating domain...
	I0612 21:37:24.841481   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:37:24.841529   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:37:24.841912   80157 buildroot.go:166] provisioning hostname "no-preload-087875"
	I0612 21:37:24.841938   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:37:24.842149   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:37:24.843818   80157 machine.go:97] duration metric: took 4m37.413209096s to provisionDockerMachine
	I0612 21:37:24.843853   80157 fix.go:56] duration metric: took 4m37.434262933s for fixHost
	I0612 21:37:24.843860   80157 start.go:83] releasing machines lock for "no-preload-087875", held for 4m37.434303466s
	W0612 21:37:24.843897   80157 start.go:713] error starting host: provision: host is not running
	W0612 21:37:24.843971   80157 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0612 21:37:24.843980   80157 start.go:728] Will try again in 5 seconds ...
	I0612 21:37:26.077364   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting to get IP...
	I0612 21:37:26.078173   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.078646   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.078686   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.078611   81491 retry.go:31] will retry after 224.429366ms: waiting for machine to come up
	I0612 21:37:26.305227   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.305668   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.305699   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.305627   81491 retry.go:31] will retry after 298.325251ms: waiting for machine to come up
	I0612 21:37:26.605155   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.605587   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.605622   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.605558   81491 retry.go:31] will retry after 327.789765ms: waiting for machine to come up
	I0612 21:37:26.935066   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.935536   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.935567   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.935477   81491 retry.go:31] will retry after 381.56012ms: waiting for machine to come up
	I0612 21:37:27.319036   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.319485   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.319516   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:27.319429   81491 retry.go:31] will retry after 474.663822ms: waiting for machine to come up
	I0612 21:37:27.796149   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.796596   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.796635   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:27.796564   81491 retry.go:31] will retry after 943.868595ms: waiting for machine to come up
	I0612 21:37:28.741715   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:28.742226   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:28.742259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:28.742180   81491 retry.go:31] will retry after 1.014472282s: waiting for machine to come up
	I0612 21:37:29.758384   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:29.758928   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:29.758947   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:29.758867   81491 retry.go:31] will retry after 971.872729ms: waiting for machine to come up
	I0612 21:37:29.845647   80157 start.go:360] acquireMachinesLock for no-preload-087875: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:37:30.732362   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:30.732794   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:30.732827   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:30.732742   81491 retry.go:31] will retry after 1.352202491s: waiting for machine to come up
	I0612 21:37:32.087272   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:32.087702   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:32.087726   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:32.087663   81491 retry.go:31] will retry after 2.276552983s: waiting for machine to come up
	I0612 21:37:34.367159   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:34.367579   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:34.367613   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:34.367520   81491 retry.go:31] will retry after 1.785262755s: waiting for machine to come up
	I0612 21:37:36.154927   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:36.155388   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:36.155412   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:36.155357   81491 retry.go:31] will retry after 3.309693081s: waiting for machine to come up
	I0612 21:37:39.468800   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:39.469443   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:39.469469   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:39.469393   81491 retry.go:31] will retry after 4.284995408s: waiting for machine to come up
	I0612 21:37:45.096430   80404 start.go:364] duration metric: took 4m40.295909999s to acquireMachinesLock for "embed-certs-591460"
	I0612 21:37:45.096485   80404 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:37:45.096490   80404 fix.go:54] fixHost starting: 
	I0612 21:37:45.096932   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:37:45.096972   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:37:45.113819   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39005
	I0612 21:37:45.114290   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:37:45.114823   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:37:45.114843   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:37:45.115208   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:37:45.115415   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:37:45.115578   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:37:45.117131   80404 fix.go:112] recreateIfNeeded on embed-certs-591460: state=Stopped err=<nil>
	I0612 21:37:45.117156   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	W0612 21:37:45.117324   80404 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:37:45.119535   80404 out.go:177] * Restarting existing kvm2 VM for "embed-certs-591460" ...
	I0612 21:37:43.759195   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.759548   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Found IP for machine: 192.168.61.80
	I0612 21:37:43.759575   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has current primary IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.759583   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Reserving static IP address...
	I0612 21:37:43.760031   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Reserved static IP address: 192.168.61.80
	I0612 21:37:43.760063   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-376087", mac: "52:54:00:01:75:58", ip: "192.168.61.80"} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.760075   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for SSH to be available...
	I0612 21:37:43.760120   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | skip adding static IP to network mk-default-k8s-diff-port-376087 - found existing host DHCP lease matching {name: "default-k8s-diff-port-376087", mac: "52:54:00:01:75:58", ip: "192.168.61.80"}
	I0612 21:37:43.760134   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Getting to WaitForSSH function...
	I0612 21:37:43.762259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.762597   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.762626   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.762741   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Using SSH client type: external
	I0612 21:37:43.762771   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa (-rw-------)
	I0612 21:37:43.762804   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:37:43.762842   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | About to run SSH command:
	I0612 21:37:43.762860   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | exit 0
	I0612 21:37:43.891446   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | SSH cmd err, output: <nil>: 
	I0612 21:37:43.891831   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetConfigRaw
	I0612 21:37:43.892485   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:43.895220   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.895625   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.895656   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.895928   80243 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/config.json ...
	I0612 21:37:43.896140   80243 machine.go:94] provisionDockerMachine start ...
	I0612 21:37:43.896161   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:43.896388   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:43.898898   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.899317   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.899346   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.899539   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:43.899727   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:43.899868   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:43.900019   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:43.900171   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:43.900360   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:43.900371   80243 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:37:44.016295   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:37:44.016327   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.016577   80243 buildroot.go:166] provisioning hostname "default-k8s-diff-port-376087"
	I0612 21:37:44.016602   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.016804   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.019396   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.019732   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.019763   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.019881   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.020084   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.020214   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.020418   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.020612   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.020803   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.020820   80243 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-376087 && echo "default-k8s-diff-port-376087" | sudo tee /etc/hostname
	I0612 21:37:44.146019   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376087
	
	I0612 21:37:44.146049   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.148758   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.149204   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.149238   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.149356   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.149538   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.149731   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.149873   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.150013   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.150187   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.150204   80243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-376087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-376087/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-376087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:37:44.272821   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:37:44.272852   80243 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:37:44.272887   80243 buildroot.go:174] setting up certificates
	I0612 21:37:44.272895   80243 provision.go:84] configureAuth start
	I0612 21:37:44.272903   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.273185   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:44.275991   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.276337   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.276366   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.276591   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.279011   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.279370   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.279396   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.279521   80243 provision.go:143] copyHostCerts
	I0612 21:37:44.279576   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:37:44.279585   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:37:44.279649   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:37:44.279740   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:37:44.279748   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:37:44.279770   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:37:44.279828   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:37:44.279835   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:37:44.279855   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:37:44.279914   80243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-376087 san=[127.0.0.1 192.168.61.80 default-k8s-diff-port-376087 localhost minikube]
	I0612 21:37:44.410909   80243 provision.go:177] copyRemoteCerts
	I0612 21:37:44.410974   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:37:44.410999   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.413740   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.414140   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.414173   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.414406   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.414597   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.414759   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.414904   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:44.501641   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:37:44.526082   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0612 21:37:44.549455   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:37:44.572447   80243 provision.go:87] duration metric: took 299.539656ms to configureAuth
	I0612 21:37:44.572473   80243 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:37:44.572632   80243 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:37:44.572731   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.575518   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.575913   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.575948   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.576170   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.576383   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.576553   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.576754   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.576913   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.577134   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.577155   80243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:37:44.851891   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:37:44.851922   80243 machine.go:97] duration metric: took 955.766062ms to provisionDockerMachine
	I0612 21:37:44.851936   80243 start.go:293] postStartSetup for "default-k8s-diff-port-376087" (driver="kvm2")
	I0612 21:37:44.851951   80243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:37:44.851970   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:44.852318   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:37:44.852352   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.855231   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.855556   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.855595   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.855727   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.855935   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.856127   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.856260   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:44.941821   80243 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:37:44.946013   80243 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:37:44.946052   80243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:37:44.946120   80243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:37:44.946200   80243 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:37:44.946281   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:37:44.955467   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:37:44.979379   80243 start.go:296] duration metric: took 127.428385ms for postStartSetup
	I0612 21:37:44.979421   80243 fix.go:56] duration metric: took 20.135375416s for fixHost
	I0612 21:37:44.979445   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.981891   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.982259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.982287   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.982520   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.982713   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.982920   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.983040   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.983220   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.983450   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.983467   80243 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:37:45.096266   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228265.072559389
	
	I0612 21:37:45.096288   80243 fix.go:216] guest clock: 1718228265.072559389
	I0612 21:37:45.096295   80243 fix.go:229] Guest: 2024-06-12 21:37:45.072559389 +0000 UTC Remote: 2024-06-12 21:37:44.979426071 +0000 UTC m=+294.653210040 (delta=93.133318ms)
	I0612 21:37:45.096313   80243 fix.go:200] guest clock delta is within tolerance: 93.133318ms
	I0612 21:37:45.096318   80243 start.go:83] releasing machines lock for "default-k8s-diff-port-376087", held for 20.252307995s
	I0612 21:37:45.096346   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.096683   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:45.099332   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.099761   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.099805   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.099902   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100560   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100767   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100841   80243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:37:45.100880   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:45.100981   80243 ssh_runner.go:195] Run: cat /version.json
	I0612 21:37:45.101007   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:45.103590   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.103774   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104052   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.104084   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104186   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.104202   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:45.104210   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104417   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:45.104430   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:45.104650   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:45.104651   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:45.104837   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:45.104852   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:45.104993   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:45.208199   80243 ssh_runner.go:195] Run: systemctl --version
	I0612 21:37:45.214375   80243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:37:45.370991   80243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:37:45.378676   80243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:37:45.378744   80243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:37:45.400622   80243 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:37:45.400642   80243 start.go:494] detecting cgroup driver to use...
	I0612 21:37:45.400709   80243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:37:45.416775   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:37:45.430261   80243 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:37:45.430314   80243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:37:45.445482   80243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:37:45.461471   80243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:37:45.578411   80243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:37:45.750493   80243 docker.go:233] disabling docker service ...
	I0612 21:37:45.750556   80243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:37:45.769072   80243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:37:45.784755   80243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:37:45.907970   80243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:37:46.031847   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:37:46.046473   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:37:46.067764   80243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:37:46.067813   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.080604   80243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:37:46.080660   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.093611   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.104443   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.117070   80243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:37:46.128759   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.139977   80243 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.157893   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.168896   80243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:37:46.179765   80243 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:37:46.179816   80243 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:37:46.194059   80243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:37:46.205474   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:37:46.322562   80243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:37:46.479073   80243 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:37:46.479149   80243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:37:46.484557   80243 start.go:562] Will wait 60s for crictl version
	I0612 21:37:46.484609   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:37:46.488403   80243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:37:46.529210   80243 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:37:46.529301   80243 ssh_runner.go:195] Run: crio --version
	I0612 21:37:46.561476   80243 ssh_runner.go:195] Run: crio --version
	I0612 21:37:46.594477   80243 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
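For reference, the block below is a minimal, illustrative Go sketch (not minikube's actual code) of the runtime handoff logged above: wait up to 60s for /var/run/crio/crio.sock to appear, then ask crictl for the runtime version. Paths, the timeout, and the use of sudo mirror the log; error handling is simplified.

// waitforcrio.go - illustrative sketch only; assumes crictl is installed and sudo is available.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second) // "Will wait 60s for socket path ..."

	for {
		if _, err := os.Stat(sock); err == nil {
			break // the socket exists, so CRI-O is at least listening
		}
		if time.Now().After(deadline) {
			log.Fatalf("timed out waiting for %s", sock)
		}
		time.Sleep(500 * time.Millisecond)
	}

	// Equivalent of the "sudo /usr/bin/crictl version" call in the log.
	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl version failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}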
	I0612 21:37:45.120900   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Start
	I0612 21:37:45.121084   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring networks are active...
	I0612 21:37:45.121776   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring network default is active
	I0612 21:37:45.122108   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring network mk-embed-certs-591460 is active
	I0612 21:37:45.122554   80404 main.go:141] libmachine: (embed-certs-591460) Getting domain xml...
	I0612 21:37:45.123260   80404 main.go:141] libmachine: (embed-certs-591460) Creating domain...
	I0612 21:37:46.357867   80404 main.go:141] libmachine: (embed-certs-591460) Waiting to get IP...
	I0612 21:37:46.358704   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.359164   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.359265   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.359144   81627 retry.go:31] will retry after 278.948395ms: waiting for machine to come up
	I0612 21:37:46.639971   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.640491   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.640523   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.640433   81627 retry.go:31] will retry after 342.550517ms: waiting for machine to come up
	I0612 21:37:46.985065   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.985590   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.985618   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.985548   81627 retry.go:31] will retry after 297.683214ms: waiting for machine to come up
	I0612 21:37:47.285192   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:47.285650   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:47.285688   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:47.285615   81627 retry.go:31] will retry after 415.994572ms: waiting for machine to come up
	I0612 21:37:47.702894   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:47.703398   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:47.703424   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:47.703353   81627 retry.go:31] will retry after 672.441633ms: waiting for machine to come up
	I0612 21:37:48.377227   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:48.377772   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:48.377802   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:48.377735   81627 retry.go:31] will retry after 790.165478ms: waiting for machine to come up
	I0612 21:37:49.169651   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:49.170194   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:49.170224   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:49.170134   81627 retry.go:31] will retry after 953.609739ms: waiting for machine to come up
	I0612 21:37:46.595772   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:46.599221   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:46.599682   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:46.599712   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:46.599919   80243 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0612 21:37:46.604573   80243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
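The bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends a fresh one pointing at the gateway. A rough Go equivalent is sketched below; the file path and IP come from the log, and the in-place rewrite (no temp file, no sudo) is a simplification.

// ensurehostsentry.go - illustrative sketch of the /etc/hosts rewrite; must run as root to write the file.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const (
		hostsFile = "/etc/hosts"
		entry     = "192.168.61.1\thost.minikube.internal"
	)

	data, err := os.ReadFile(hostsFile)
	if err != nil {
		log.Fatal(err)
	}

	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // same filter as: grep -v $'\thost.minikube.internal$'
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"

	if err := os.WriteFile(hostsFile, []byte(out), 0o644); err != nil {
		log.Fatal(err)
	}
}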
	I0612 21:37:46.617274   80243 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-376087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:37:46.617388   80243 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:37:46.617443   80243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:37:46.663227   80243 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:37:46.663306   80243 ssh_runner.go:195] Run: which lz4
	I0612 21:37:46.667878   80243 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 21:37:46.672384   80243 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:37:46.672416   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 21:37:48.195844   80243 crio.go:462] duration metric: took 1.527996646s to copy over tarball
	I0612 21:37:48.195908   80243 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:37:50.125800   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:50.126305   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:50.126337   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:50.126260   81627 retry.go:31] will retry after 938.251336ms: waiting for machine to come up
	I0612 21:37:51.065851   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:51.066225   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:51.066247   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:51.066194   81627 retry.go:31] will retry after 1.635454683s: waiting for machine to come up
	I0612 21:37:52.704193   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:52.704663   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:52.704687   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:52.704633   81627 retry.go:31] will retry after 1.56455027s: waiting for machine to come up
	I0612 21:37:54.271391   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:54.271873   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:54.271919   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:54.271826   81627 retry.go:31] will retry after 2.052574222s: waiting for machine to come up
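The embed-certs-591460 lines above show the "waiting for machine to come up" pattern: poll for the guest's IP and retry with a growing, jittered delay. The sketch below is an illustration of that retry loop only, not libmachine's code; lookupIP is a hypothetical stand-in for reading the libvirt network's DHCP leases and matching the domain's MAC address.

// waitforip.go - illustrative retry-with-backoff sketch; lookupIP is a hypothetical helper.
package main

import (
	"errors"
	"fmt"
	"log"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	// A real implementation would parse the DHCP leases for the domain's MAC address.
	return "", errors.New("unable to find current IP address of domain")
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	delay := 250 * time.Millisecond

	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the wait each round and add jitter, like the
		// "will retry after 278ms / 342ms / ..." lines above.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		log.Printf("will retry after %v: waiting for machine to come up", wait)
		time.Sleep(wait)
		delay += delay / 4
	}
	log.Fatal("timed out waiting for machine to come up")
}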
	I0612 21:37:50.464553   80243 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.268615304s)
	I0612 21:37:50.464601   80243 crio.go:469] duration metric: took 2.268715227s to extract the tarball
	I0612 21:37:50.464612   80243 ssh_runner.go:146] rm: /preloaded.tar.lz4
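A condensed Go sketch of the preload handling above (scp the tarball, unpack it into /var, delete it) is given below for orientation. The tar flags and paths are taken from the log; the copy-over-SSH step is omitted and sudo is assumed to be available.

// extractpreload.go - illustrative sketch of the preload extraction step.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("no preload tarball: %v", err) // in the log it is scp'd over first
	}

	// Same command as the log: tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf ...
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}

	if err := os.Remove(tarball); err != nil {
		log.Printf("could not remove %s: %v", tarball, err)
	}
}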
	I0612 21:37:50.502406   80243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:37:50.550796   80243 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:37:50.550821   80243 cache_images.go:84] Images are preloaded, skipping loading
	I0612 21:37:50.550831   80243 kubeadm.go:928] updating node { 192.168.61.80 8444 v1.30.1 crio true true} ...
	I0612 21:37:50.550957   80243 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-376087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:37:50.551042   80243 ssh_runner.go:195] Run: crio config
	I0612 21:37:50.603232   80243 cni.go:84] Creating CNI manager for ""
	I0612 21:37:50.603256   80243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:37:50.603268   80243 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:37:50.603299   80243 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.80 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-376087 NodeName:default-k8s-diff-port-376087 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:37:50.603459   80243 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.80
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-376087"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:37:50.603524   80243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:37:50.614003   80243 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:37:50.614082   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:37:50.623416   80243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0612 21:37:50.640203   80243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:37:50.656668   80243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
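The kubeadm config shown earlier is staged on the guest as kubeadm.yaml.new and only promoted when it differs from the existing file (see the "diff -u ... kubeadm.yaml.new" and "cp ... kubeadm.yaml.new ... kubeadm.yaml" runs further down). The sketch below illustrates that write/diff/promote idea in Go; it is not minikube's implementation, the rendered content is abbreviated, and root permissions on the target paths are assumed.

// pushkubeadmconfig.go - illustrative sketch of staging and promoting the rendered kubeadm config.
package main

import (
	"bytes"
	"log"
	"os"
)

func main() {
	const (
		current = "/var/tmp/minikube/kubeadm.yaml"
		next    = "/var/tmp/minikube/kubeadm.yaml.new"
	)

	rendered := []byte("apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n# ... full config as shown above ...\n")

	if err := os.WriteFile(next, rendered, 0o644); err != nil {
		log.Fatal(err)
	}

	old, err := os.ReadFile(current)
	if err == nil && bytes.Equal(old, rendered) {
		log.Printf("kubeadm config unchanged, nothing to do")
		return
	}

	// Equivalent of: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	if err := os.Rename(next, current); err != nil {
		log.Fatal(err)
	}
	log.Printf("kubeadm config updated")
}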
	I0612 21:37:50.674601   80243 ssh_runner.go:195] Run: grep 192.168.61.80	control-plane.minikube.internal$ /etc/hosts
	I0612 21:37:50.678858   80243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:37:50.692389   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:37:50.822225   80243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:37:50.840703   80243 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087 for IP: 192.168.61.80
	I0612 21:37:50.840734   80243 certs.go:194] generating shared ca certs ...
	I0612 21:37:50.840758   80243 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:37:50.840936   80243 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:37:50.840986   80243 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:37:50.840999   80243 certs.go:256] generating profile certs ...
	I0612 21:37:50.841133   80243 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/client.key
	I0612 21:37:50.841200   80243 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.key.0afce446
	I0612 21:37:50.841238   80243 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.key
	I0612 21:37:50.841357   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:37:50.841398   80243 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:37:50.841409   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:37:50.841438   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:37:50.841469   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:37:50.841489   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:37:50.841529   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:37:50.842311   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:37:50.880075   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:37:50.914504   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:37:50.945724   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:37:50.975702   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0612 21:37:51.009817   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:37:51.039086   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:37:51.064146   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:37:51.088483   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:37:51.112785   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:37:51.136192   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:37:51.159239   80243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:37:51.175719   80243 ssh_runner.go:195] Run: openssl version
	I0612 21:37:51.181707   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:37:51.193498   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.198415   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.198475   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.204601   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:37:51.216354   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:37:51.231979   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.236952   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.237018   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.243461   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:37:51.258481   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:37:51.273412   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.279356   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.279420   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.285551   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:37:51.298066   80243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:37:51.302791   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:37:51.309402   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:37:51.316170   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:37:51.322785   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:37:51.329066   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:37:51.335031   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
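The repeated "openssl x509 -noout -in <cert> -checkend 86400" runs above ask whether each control-plane certificate expires within the next 24 hours. A Go equivalent using crypto/x509 is sketched below; the file list is a subset of the certificates checked in the log, and reading them requires root.

// checkcertexpiry.go - illustrative Go version of the -checkend 86400 checks.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	cutoff := time.Now().Add(24 * time.Hour) // -checkend 86400

	for _, path := range certs {
		data, err := os.ReadFile(path)
		if err != nil {
			log.Printf("%s: %v", path, err)
			continue
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Printf("%s: not PEM", path)
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Printf("%s: %v", path, err)
			continue
		}
		if cert.NotAfter.Before(cutoff) {
			fmt.Printf("%s expires at %s (within 24h)\n", path, cert.NotAfter)
		}
	}
}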
	I0612 21:37:51.340945   80243 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-376087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:37:51.341082   80243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:37:51.341143   80243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:37:51.383011   80243 cri.go:89] found id: ""
	I0612 21:37:51.383134   80243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:37:51.394768   80243 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:37:51.394794   80243 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:37:51.394800   80243 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:37:51.394852   80243 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:37:51.408147   80243 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:37:51.409094   80243 kubeconfig.go:125] found "default-k8s-diff-port-376087" server: "https://192.168.61.80:8444"
	I0612 21:37:51.411221   80243 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:37:51.421897   80243 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.80
	I0612 21:37:51.421934   80243 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:37:51.421949   80243 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:37:51.422029   80243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:37:51.470321   80243 cri.go:89] found id: ""
	I0612 21:37:51.470441   80243 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:37:51.488369   80243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:37:51.498367   80243 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:37:51.498388   80243 kubeadm.go:156] found existing configuration files:
	
	I0612 21:37:51.498449   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0612 21:37:51.510212   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:37:51.510287   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:37:51.520231   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0612 21:37:51.529270   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:37:51.529339   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:37:51.538902   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0612 21:37:51.548593   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:37:51.548652   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:37:51.558533   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0612 21:37:51.567995   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:37:51.568063   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
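The grep/rm sequence above keeps each kubeconfig under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8444, and removes it otherwise so kubeadm regenerates it. Below is an illustrative Go sketch of that cleanup loop (not minikube's code); root access is assumed.

// cleanstaleconfigs.go - illustrative sketch of the stale kubeconfig cleanup.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Same effect as: sudo grep <endpoint> <f> || sudo rm -f <f>
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				log.Printf("remove %s: %v", f, rmErr)
			}
			continue
		}
		log.Printf("%s already targets %s, keeping it", f, endpoint)
	}
}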
	I0612 21:37:51.577695   80243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:37:51.587794   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:51.718155   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.602448   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.820456   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.901167   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.977502   80243 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:37:52.977606   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.477802   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.977879   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.995753   80243 api_server.go:72] duration metric: took 1.018251882s to wait for apiserver process to appear ...
	I0612 21:37:53.995788   80243 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:37:53.995812   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:53.996308   80243 api_server.go:269] stopped: https://192.168.61.80:8444/healthz: Get "https://192.168.61.80:8444/healthz": dial tcp 192.168.61.80:8444: connect: connection refused
	I0612 21:37:54.496045   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.293362   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:37:57.293394   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:37:57.293408   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.395854   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:57.395886   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:57.496122   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.505090   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:57.505124   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:57.996334   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:58.000606   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:58.000646   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:58.496177   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:58.504422   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 200:
	ok
	I0612 21:37:58.513123   80243 api_server.go:141] control plane version: v1.30.1
	I0612 21:37:58.513150   80243 api_server.go:131] duration metric: took 4.517354722s to wait for apiserver health ...
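The healthz wait above polls https://192.168.61.80:8444/healthz until the 403/500 responses give way to a 200. The loop below is a minimal sketch of that polling pattern; unlike the real client, which authenticates with the cluster CA and client certificates, it simply skips TLS verification to stay short.

// waitforhealthz.go - illustrative sketch of polling the apiserver healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	deadline := time.Now().Add(4 * time.Minute)

	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.80:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // "ok"
				return
			}
			log.Printf("healthz returned %d, retrying", resp.StatusCode)
		} else {
			log.Printf("healthz not reachable yet: %v", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}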
	I0612 21:37:58.513158   80243 cni.go:84] Creating CNI manager for ""
	I0612 21:37:58.513163   80243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:37:58.514696   80243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:37:56.325937   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:56.326316   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:56.326343   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:56.326261   81627 retry.go:31] will retry after 3.51636746s: waiting for machine to come up
	I0612 21:37:58.516091   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:37:58.541034   80243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:37:58.585635   80243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:37:58.596829   80243 system_pods.go:59] 8 kube-system pods found
	I0612 21:37:58.596859   80243 system_pods.go:61] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:37:58.596867   80243 system_pods.go:61] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:37:58.596873   80243 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:37:58.596883   80243 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:37:58.596888   80243 system_pods.go:61] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0612 21:37:58.596893   80243 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:37:58.596899   80243 system_pods.go:61] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:37:58.596904   80243 system_pods.go:61] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:37:58.596910   80243 system_pods.go:74] duration metric: took 11.248328ms to wait for pod list to return data ...
	I0612 21:37:58.596917   80243 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:37:58.600081   80243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:37:58.600107   80243 node_conditions.go:123] node cpu capacity is 2
	I0612 21:37:58.600119   80243 node_conditions.go:105] duration metric: took 3.197181ms to run NodePressure ...
	I0612 21:37:58.600134   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:58.911963   80243 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:37:58.918455   80243 kubeadm.go:733] kubelet initialised
	I0612 21:37:58.918475   80243 kubeadm.go:734] duration metric: took 6.490654ms waiting for restarted kubelet to initialise ...
	I0612 21:37:58.918482   80243 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:37:58.924427   80243 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.930290   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.930329   80243 pod_ready.go:81] duration metric: took 5.86525ms for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.930339   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.930346   80243 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.935394   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.935416   80243 pod_ready.go:81] duration metric: took 5.061639ms for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.935426   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.935431   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.940238   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.940268   80243 pod_ready.go:81] duration metric: took 4.829842ms for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.940286   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.940295   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.989649   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.989686   80243 pod_ready.go:81] duration metric: took 49.380431ms for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.989702   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.989711   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:59.389868   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-proxy-8lrgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.389903   80243 pod_ready.go:81] duration metric: took 400.174877ms for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:59.389912   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-proxy-8lrgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.389918   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:59.790398   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.790425   80243 pod_ready.go:81] duration metric: took 400.499157ms for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:59.790435   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.790449   80243 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:00.189506   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:00.189533   80243 pod_ready.go:81] duration metric: took 399.075983ms for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:00.189551   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:00.189559   80243 pod_ready.go:38] duration metric: took 1.271068537s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
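The block above polls each system-critical pod and skips the Ready check while the node itself still reports Ready=False. As a rough illustration of that kind of readiness probe (not minikube's own code), here is a small client-go sketch that lists kube-system pods and reports whether each carries the PodReady condition; the kubeconfig path is the in-VM one seen later in this log and would need adjusting elsewhere.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", p.Name, ready)
	}
}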
	I0612 21:38:00.189574   80243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:38:00.201480   80243 ops.go:34] apiserver oom_adj: -16
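The -16 read back here is the OOM score adjustment the kubelet applied to the API server process. A minimal sketch of the same probe, assuming pgrep is available on the guest:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the oldest kube-apiserver PID, then read its oom_adj from /proc.
	out, err := exec.Command("pgrep", "-o", "kube-apiserver").Output()
	if err != nil {
		log.Fatal(err)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("kube-apiserver oom_adj: %s", adj)
}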
	I0612 21:38:00.201504   80243 kubeadm.go:591] duration metric: took 8.806697524s to restartPrimaryControlPlane
	I0612 21:38:00.201514   80243 kubeadm.go:393] duration metric: took 8.860579681s to StartCluster
	I0612 21:38:00.201536   80243 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:00.201601   80243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:38:00.203106   80243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:00.203416   80243 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:38:00.205568   80243 out.go:177] * Verifying Kubernetes components...
	I0612 21:38:00.203448   80243 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:38:00.203614   80243 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:00.207110   80243 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207120   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:00.207120   80243 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207143   80243 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207166   80243 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.207193   80243 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:38:00.207187   80243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-376087"
	I0612 21:38:00.207208   80243 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.207222   80243 addons.go:243] addon metrics-server should already be in state true
	I0612 21:38:00.207230   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.207263   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.207490   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207511   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207519   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.207544   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.207553   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207572   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.222521   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0612 21:38:00.222979   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.223496   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.223523   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.223899   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.224519   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.224555   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.227511   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
	I0612 21:38:00.227543   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33041
	I0612 21:38:00.227874   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.227930   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.228402   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.228409   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.228426   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.228471   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.228776   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.228780   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.228952   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.229291   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.229323   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.232640   80243 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.232662   80243 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:38:00.232690   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.233072   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.233103   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.240883   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38355
	I0612 21:38:00.241363   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.241839   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.241861   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.242217   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.242434   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.244544   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.244604   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0612 21:38:00.246924   80243 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:38:00.244915   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.248406   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:38:00.248430   80243 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:38:00.248451   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.248861   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.248887   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.249211   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.249431   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.251070   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.251137   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43271
	I0612 21:38:00.252729   80243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:00.251644   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.252033   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.252601   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.254033   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.254079   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.254111   80243 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:38:00.254127   80243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:38:00.254148   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.254211   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.254399   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.254515   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.254542   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.254712   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.254926   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.256878   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.256948   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.257836   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.258073   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.258105   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.258767   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.258993   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.259141   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.259283   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.272822   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42339
	I0612 21:38:00.273238   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.273710   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.273734   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.274221   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.274411   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.276056   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.276286   80243 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:38:00.276302   80243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:38:00.276323   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.279285   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.279351   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.279400   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.279516   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.279675   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.279809   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.279940   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.392656   80243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:00.411972   80243 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-376087" to be "Ready" ...
	I0612 21:38:00.502108   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:38:00.504572   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:38:00.504590   80243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:38:00.522021   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:38:00.522057   80243 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:38:00.538366   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:38:00.541981   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:38:00.541999   80243 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:38:00.561335   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:38:01.519955   80243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.017815416s)
	I0612 21:38:01.520006   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520019   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520087   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520100   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520312   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520334   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520343   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520350   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520422   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520435   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520444   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520452   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520554   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520573   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520647   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520678   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.520680   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.528807   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.528827   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.529143   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.529162   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.529166   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.556376   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.556399   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.556701   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.556750   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.556762   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.556780   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.556791   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.557157   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.557179   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.557190   80243 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-376087"
	I0612 21:38:01.559103   80243 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
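Each addon enabled above is staged as YAML under /etc/kubernetes/addons on the guest and applied with the bundled kubectl against the in-VM kubeconfig. The sketch below shows the same shape of invocation from Go; the binary path and manifest names are taken from the log, and running it anywhere but inside the VM would need different paths.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Apply the staged metrics-server manifests, mirroring the kubectl call above.
	args := []string{
		"apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.30.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}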
	I0612 21:37:59.844024   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:59.844481   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:59.844505   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:59.844433   81627 retry.go:31] will retry after 3.77902453s: waiting for machine to come up
	I0612 21:38:03.626861   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.627380   80404 main.go:141] libmachine: (embed-certs-591460) Found IP for machine: 192.168.39.147
	I0612 21:38:03.627399   80404 main.go:141] libmachine: (embed-certs-591460) Reserving static IP address...
	I0612 21:38:03.627416   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has current primary IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.627917   80404 main.go:141] libmachine: (embed-certs-591460) Reserved static IP address: 192.168.39.147
	I0612 21:38:03.627964   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "embed-certs-591460", mac: "52:54:00:41:f7:d9", ip: "192.168.39.147"} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.627981   80404 main.go:141] libmachine: (embed-certs-591460) Waiting for SSH to be available...
	I0612 21:38:03.628011   80404 main.go:141] libmachine: (embed-certs-591460) DBG | skip adding static IP to network mk-embed-certs-591460 - found existing host DHCP lease matching {name: "embed-certs-591460", mac: "52:54:00:41:f7:d9", ip: "192.168.39.147"}
	I0612 21:38:03.628030   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Getting to WaitForSSH function...
	I0612 21:38:03.630082   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.630480   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.630581   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.630762   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Using SSH client type: external
	I0612 21:38:03.630802   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa (-rw-------)
	I0612 21:38:03.630846   80404 main.go:141] libmachine: (embed-certs-591460) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:03.630872   80404 main.go:141] libmachine: (embed-certs-591460) DBG | About to run SSH command:
	I0612 21:38:03.630882   80404 main.go:141] libmachine: (embed-certs-591460) DBG | exit 0
	I0612 21:38:03.755304   80404 main.go:141] libmachine: (embed-certs-591460) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:03.755720   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetConfigRaw
	I0612 21:38:03.756310   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:03.758608   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.758927   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.758966   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.759153   80404 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/config.json ...
	I0612 21:38:03.759390   80404 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:03.759414   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:03.759641   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.761954   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.762215   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.762244   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.762371   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.762525   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.762689   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.762842   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.762995   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.763183   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.763206   80404 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:03.867900   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:03.867936   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:03.868185   80404 buildroot.go:166] provisioning hostname "embed-certs-591460"
	I0612 21:38:03.868210   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:03.868430   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.871347   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.871690   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.871721   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.871816   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.871977   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.872130   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.872258   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.872408   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.872588   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.872604   80404 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-591460 && echo "embed-certs-591460" | sudo tee /etc/hostname
	I0612 21:38:03.990526   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-591460
	
	I0612 21:38:03.990550   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.993057   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.993458   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.993485   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.993646   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.993830   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.993985   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.994125   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.994297   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.994499   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.994524   80404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-591460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-591460/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-591460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:04.120595   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
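The shell just run ensures the new hostname resolves locally: if any line already names the host nothing changes, an existing 127.0.1.1 line is rewritten, and otherwise one is appended. A rough Go equivalent of that logic, assuming write access to the hosts file:

package main

import (
	"log"
	"os"
	"regexp"
)

// ensureHostsEntry mirrors the /etc/hosts edit above for the given hostname.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Skip if some line already ends with the hostname.
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil
	}
	line := "127.0.1.1 " + hostname
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte(line))
	} else {
		data = append(data, []byte("\n"+line+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "embed-certs-591460"); err != nil {
		log.Fatal(err)
	}
}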
	I0612 21:38:04.120623   80404 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:04.120640   80404 buildroot.go:174] setting up certificates
	I0612 21:38:04.120650   80404 provision.go:84] configureAuth start
	I0612 21:38:04.120658   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:04.120910   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:04.123483   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.123854   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.123879   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.124153   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.126901   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.127293   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.127318   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.127494   80404 provision.go:143] copyHostCerts
	I0612 21:38:04.127554   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:04.127566   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:04.127635   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:04.127736   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:04.127747   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:04.127785   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:04.127860   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:04.127870   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:04.127896   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:04.127960   80404 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.embed-certs-591460 san=[127.0.0.1 192.168.39.147 embed-certs-591460 localhost minikube]
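configureAuth regenerates a server certificate whose SANs cover loopback, the VM IP, the profile name, localhost, and minikube. The crypto/x509 sketch below produces a SAN-bearing certificate of that shape; for brevity it is self-signed rather than signed by the profile CA, and the SAN list is copied from the log line above.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-591460"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provisioning log.
		DNSNames:    []string{"embed-certs-591460", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.147")},
	}
	// Self-signed here; minikube signs with its own CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}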
	I0612 21:38:04.265296   80404 provision.go:177] copyRemoteCerts
	I0612 21:38:04.265361   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:04.265392   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.267703   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.268044   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.268090   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.268244   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.268421   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.268583   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.268780   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.349440   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:04.374868   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0612 21:38:04.398419   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:38:04.423319   80404 provision.go:87] duration metric: took 302.657777ms to configureAuth
	I0612 21:38:04.423353   80404 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:04.423514   80404 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:04.423586   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.426301   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.426612   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.426641   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.426796   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.426971   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.427186   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.427331   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.427553   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:04.427723   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:04.427739   80404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:04.689161   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:04.689199   80404 machine.go:97] duration metric: took 929.790838ms to provisionDockerMachine
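A few lines above, the provisioner writes CRIO_MINIKUBE_OPTIONS (an --insecure-registry flag covering the 10.96.0.0/12 service CIDR) into /etc/sysconfig/crio.minikube and restarts CRI-O so the option takes effect. The sketch below reproduces that step when run as root on the guest; the drop-in path and flag value mirror the log, but this is illustrative rather than minikube's implementation, and the file is presumably consumed by the crio service unit on this ISO.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Drop-in read by the crio unit; must run as root.
	drop := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(drop), 0644); err != nil {
		log.Fatal(err)
	}
	// Restart CRI-O so the new option is picked up.
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		log.Fatalf("restart crio: %v: %s", err, out)
	}
}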
	I0612 21:38:04.689212   80404 start.go:293] postStartSetup for "embed-certs-591460" (driver="kvm2")
	I0612 21:38:04.689223   80404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:04.689242   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.689569   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:04.689616   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.692484   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.692841   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.692864   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.693002   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.693191   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.693326   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.693469   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.923975   80762 start.go:364] duration metric: took 4m11.963543792s to acquireMachinesLock for "old-k8s-version-983302"
	I0612 21:38:04.924056   80762 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:38:04.924068   80762 fix.go:54] fixHost starting: 
	I0612 21:38:04.924507   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:04.924543   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:04.942022   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0612 21:38:04.942428   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:04.942891   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:38:04.942917   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:04.943345   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:04.943553   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:04.943726   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetState
	I0612 21:38:04.945403   80762 fix.go:112] recreateIfNeeded on old-k8s-version-983302: state=Stopped err=<nil>
	I0612 21:38:04.945427   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	W0612 21:38:04.945581   80762 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:38:04.947672   80762 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-983302" ...
	I0612 21:38:01.560387   80243 addons.go:510] duration metric: took 1.356939902s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0612 21:38:02.416070   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:04.416451   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:04.774287   80404 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:04.778568   80404 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:04.778596   80404 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:04.778667   80404 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:04.778740   80404 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:04.778819   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:04.788602   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:04.813969   80404 start.go:296] duration metric: took 124.741162ms for postStartSetup
	I0612 21:38:04.814020   80404 fix.go:56] duration metric: took 19.717527303s for fixHost
	I0612 21:38:04.814049   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.816907   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.817268   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.817294   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.817511   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.817728   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.817905   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.818087   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.818293   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:04.818501   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:04.818516   80404 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:04.923846   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228284.879920542
	
	I0612 21:38:04.923868   80404 fix.go:216] guest clock: 1718228284.879920542
	I0612 21:38:04.923874   80404 fix.go:229] Guest: 2024-06-12 21:38:04.879920542 +0000 UTC Remote: 2024-06-12 21:38:04.814026698 +0000 UTC m=+300.152179547 (delta=65.893844ms)
	I0612 21:38:04.923890   80404 fix.go:200] guest clock delta is within tolerance: 65.893844ms
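The fix step above reads the guest clock as "seconds.nanoseconds", compares it with the host clock, and proceeds only when the delta is within tolerance (65.893844ms here). A small sketch of that comparison, parsing the guest output the same way; the one-second tolerance is an assumption, not minikube's actual threshold.

package main

import (
	"fmt"
	"log"
	"math"
	"strconv"
	"time"
)

// clockDelta parses a "seconds.nanoseconds" timestamp from the guest
// and returns its offset from the local (host) clock.
func clockDelta(guest string) (time.Duration, error) {
	f, err := strconv.ParseFloat(guest, 64)
	if err != nil {
		return 0, err
	}
	guestTime := time.Unix(0, int64(f*float64(time.Second)))
	return time.Since(guestTime), nil
}

func main() {
	const tolerance = time.Second // assumed
	d, err := clockDelta("1718228284.879920542")
	if err != nil {
		log.Fatal(err)
	}
	if math.Abs(float64(d)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", d)
	}
}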
	I0612 21:38:04.923894   80404 start.go:83] releasing machines lock for "embed-certs-591460", held for 19.827427255s
	I0612 21:38:04.923920   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.924155   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:04.926708   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.927102   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.927146   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.927281   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.927788   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.927955   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.928043   80404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:04.928099   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.928158   80404 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:04.928182   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.930931   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931237   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931377   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.931415   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931561   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.931587   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931592   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.931742   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.931790   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.931916   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.931916   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.932111   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.932127   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.932250   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:05.009184   80404 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:05.035746   80404 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:05.181527   80404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:05.189035   80404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:05.189113   80404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:05.205860   80404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:05.205886   80404 start.go:494] detecting cgroup driver to use...
	I0612 21:38:05.205957   80404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:05.223913   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:05.239598   80404 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:05.239679   80404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:05.253501   80404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:05.268094   80404 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:05.397260   80404 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:05.560454   80404 docker.go:233] disabling docker service ...
	I0612 21:38:05.560532   80404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:05.579197   80404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:05.593420   80404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:05.728145   80404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:05.860041   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:05.876025   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:05.895242   80404 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:38:05.895336   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.906575   80404 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:05.906662   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.918248   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.929178   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.942169   80404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:05.953542   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.969045   80404 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.989509   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:06.001532   80404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:06.012676   80404 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:06.012740   80404 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:06.030028   80404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:06.048168   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:06.190039   80404 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:06.349088   80404 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:06.349151   80404 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:06.355251   80404 start.go:562] Will wait 60s for crictl version
	I0612 21:38:06.355321   80404 ssh_runner.go:195] Run: which crictl
	I0612 21:38:06.359456   80404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:06.400450   80404 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:06.400525   80404 ssh_runner.go:195] Run: crio --version
	I0612 21:38:06.430078   80404 ssh_runner.go:195] Run: crio --version
	I0612 21:38:06.461616   80404 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:38:04.949078   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .Start
	I0612 21:38:04.949226   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring networks are active...
	I0612 21:38:04.949936   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network default is active
	I0612 21:38:04.950371   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network mk-old-k8s-version-983302 is active
	I0612 21:38:04.950813   80762 main.go:141] libmachine: (old-k8s-version-983302) Getting domain xml...
	I0612 21:38:04.951549   80762 main.go:141] libmachine: (old-k8s-version-983302) Creating domain...
	I0612 21:38:06.296150   80762 main.go:141] libmachine: (old-k8s-version-983302) Waiting to get IP...
	I0612 21:38:06.296978   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.297465   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.297570   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.297453   81824 retry.go:31] will retry after 256.609938ms: waiting for machine to come up
	I0612 21:38:06.556307   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.556935   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.556967   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.556884   81824 retry.go:31] will retry after 285.754887ms: waiting for machine to come up
	I0612 21:38:06.844674   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.845227   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.845255   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.845171   81824 retry.go:31] will retry after 326.266367ms: waiting for machine to come up
	I0612 21:38:07.172788   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:07.173414   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:07.173447   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:07.173353   81824 retry.go:31] will retry after 393.443927ms: waiting for machine to come up
	I0612 21:38:07.568084   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:07.568645   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:07.568673   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:07.568609   81824 retry.go:31] will retry after 726.66775ms: waiting for machine to come up
	I0612 21:38:06.462860   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:06.466111   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:06.466521   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:06.466551   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:06.466837   80404 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:06.471361   80404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:06.485595   80404 kubeadm.go:877] updating cluster {Name:embed-certs-591460 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:06.485718   80404 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:38:06.485761   80404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:06.528708   80404 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:38:06.528778   80404 ssh_runner.go:195] Run: which lz4
	I0612 21:38:06.533340   80404 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 21:38:06.538076   80404 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:38:06.538115   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 21:38:08.078495   80404 crio.go:462] duration metric: took 1.545201872s to copy over tarball
	I0612 21:38:08.078573   80404 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:38:06.917632   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:07.916734   80243 node_ready.go:49] node "default-k8s-diff-port-376087" has status "Ready":"True"
	I0612 21:38:07.916763   80243 node_ready.go:38] duration metric: took 7.504763576s for node "default-k8s-diff-port-376087" to be "Ready" ...
	I0612 21:38:07.916775   80243 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:38:07.924249   80243 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.931751   80243 pod_ready.go:92] pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:07.931773   80243 pod_ready.go:81] duration metric: took 7.493608ms for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.931782   80243 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.937804   80243 pod_ready.go:92] pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:07.937880   80243 pod_ready.go:81] duration metric: took 6.090191ms for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.937904   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:09.944927   80243 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:08.296811   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:08.297295   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:08.297319   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:08.297250   81824 retry.go:31] will retry after 658.540746ms: waiting for machine to come up
	I0612 21:38:08.957164   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:08.957611   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:08.957635   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:08.957576   81824 retry.go:31] will retry after 921.725713ms: waiting for machine to come up
	I0612 21:38:09.880881   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:09.881672   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:09.881703   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:09.881604   81824 retry.go:31] will retry after 1.355846361s: waiting for machine to come up
	I0612 21:38:11.238616   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:11.239058   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:11.239094   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:11.238996   81824 retry.go:31] will retry after 1.3469357s: waiting for machine to come up
	I0612 21:38:12.587245   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:12.587747   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:12.587785   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:12.587683   81824 retry.go:31] will retry after 1.616666063s: waiting for machine to come up
	I0612 21:38:10.426384   80404 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.347778968s)
	I0612 21:38:10.426418   80404 crio.go:469] duration metric: took 2.347893056s to extract the tarball
	I0612 21:38:10.426427   80404 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:38:10.472235   80404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:10.522846   80404 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:38:10.522869   80404 cache_images.go:84] Images are preloaded, skipping loading
	I0612 21:38:10.522876   80404 kubeadm.go:928] updating node { 192.168.39.147 8443 v1.30.1 crio true true} ...
	I0612 21:38:10.523007   80404 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-591460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:38:10.523163   80404 ssh_runner.go:195] Run: crio config
	I0612 21:38:10.577165   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:38:10.577193   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:10.577209   80404 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:38:10.577244   80404 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-591460 NodeName:embed-certs-591460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:38:10.577400   80404 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-591460"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:38:10.577479   80404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:38:10.587499   80404 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:38:10.587573   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:38:10.597410   80404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0612 21:38:10.614617   80404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:38:10.632222   80404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0612 21:38:10.649693   80404 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I0612 21:38:10.653639   80404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:10.666501   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:10.802679   80404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:10.820975   80404 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460 for IP: 192.168.39.147
	I0612 21:38:10.821001   80404 certs.go:194] generating shared ca certs ...
	I0612 21:38:10.821022   80404 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:10.821187   80404 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:38:10.821233   80404 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:38:10.821243   80404 certs.go:256] generating profile certs ...
	I0612 21:38:10.821326   80404 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/client.key
	I0612 21:38:10.821402   80404 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.key.3b2e21e0
	I0612 21:38:10.821440   80404 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.key
	I0612 21:38:10.821575   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:38:10.821616   80404 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:38:10.821626   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:38:10.821655   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:38:10.821706   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:38:10.821751   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:38:10.821812   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:10.822621   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:38:10.879261   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:38:10.924352   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:38:10.961294   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:38:10.993792   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0612 21:38:11.039515   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:38:11.063161   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:38:11.086759   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:38:11.109693   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:38:11.133083   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:38:11.155716   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:38:11.181860   80404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:38:11.199989   80404 ssh_runner.go:195] Run: openssl version
	I0612 21:38:11.205811   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:38:11.216640   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.221692   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.221754   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.227829   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:38:11.239918   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:38:11.251648   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.256123   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.256176   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.261880   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:38:11.273184   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:38:11.284832   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.289679   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.289732   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.295338   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:38:11.306317   80404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:38:11.310737   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:38:11.320403   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:38:11.327756   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:38:11.333976   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:38:11.340200   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:38:11.346386   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 21:38:11.352268   80404 kubeadm.go:391] StartCluster: {Name:embed-certs-591460 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:38:11.352385   80404 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:38:11.352435   80404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:11.390802   80404 cri.go:89] found id: ""
	I0612 21:38:11.390870   80404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:38:11.402604   80404 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:38:11.402626   80404 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:38:11.402630   80404 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:38:11.402682   80404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:38:11.413636   80404 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:38:11.414999   80404 kubeconfig.go:125] found "embed-certs-591460" server: "https://192.168.39.147:8443"
	I0612 21:38:11.417654   80404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:38:11.427456   80404 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.147
	I0612 21:38:11.427496   80404 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:38:11.427509   80404 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:38:11.427559   80404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:11.462135   80404 cri.go:89] found id: ""
	I0612 21:38:11.462211   80404 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:38:11.478193   80404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:38:11.488816   80404 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:38:11.488838   80404 kubeadm.go:156] found existing configuration files:
	
	I0612 21:38:11.488899   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:38:11.498079   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:38:11.498154   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:38:11.508044   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:38:11.519721   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:38:11.519785   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:38:11.529554   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:38:11.538699   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:38:11.538750   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:38:11.548154   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:38:11.559980   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:38:11.560053   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:38:11.569737   80404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:38:11.579812   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:11.703454   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:12.773142   80404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069644541s)
	I0612 21:38:12.773183   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:12.991458   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:13.080268   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:13.207751   80404 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:38:13.207934   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:13.708672   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:14.208389   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:14.268408   80404 api_server.go:72] duration metric: took 1.060631955s to wait for apiserver process to appear ...
	I0612 21:38:14.268443   80404 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:38:14.268464   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:14.269096   80404 api_server.go:269] stopped: https://192.168.39.147:8443/healthz: Get "https://192.168.39.147:8443/healthz": dial tcp 192.168.39.147:8443: connect: connection refused
	I0612 21:38:10.445507   80243 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.445530   80243 pod_ready.go:81] duration metric: took 2.50760731s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.445542   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.450290   80243 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.450310   80243 pod_ready.go:81] duration metric: took 4.759656ms for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.450323   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.454909   80243 pod_ready.go:92] pod "kube-proxy-8lrgv" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.454940   80243 pod_ready.go:81] duration metric: took 4.597123ms for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.454951   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:12.587416   80243 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:13.505858   80243 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:13.505884   80243 pod_ready.go:81] duration metric: took 3.050925673s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:13.505896   80243 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:14.206281   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:14.206781   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:14.206810   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:14.206716   81824 retry.go:31] will retry after 2.057638604s: waiting for machine to come up
	I0612 21:38:16.266372   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:16.266920   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:16.266955   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:16.266858   81824 retry.go:31] will retry after 2.387834661s: waiting for machine to come up
	I0612 21:38:14.769114   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.056504   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:38:17.056539   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:38:17.056557   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.075356   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:38:17.075391   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:38:17.268731   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.277080   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:38:17.277111   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:38:17.768638   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.773438   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:38:17.773464   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:38:18.269037   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:18.273939   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0612 21:38:18.286895   80404 api_server.go:141] control plane version: v1.30.1
	I0612 21:38:18.286922   80404 api_server.go:131] duration metric: took 4.018473342s to wait for apiserver health ...
	I0612 21:38:18.286931   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:38:18.286937   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:18.288955   80404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:38:18.290619   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:38:18.305334   80404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:38:18.336590   80404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:38:18.351276   80404 system_pods.go:59] 8 kube-system pods found
	I0612 21:38:18.351320   80404 system_pods.go:61] "coredns-7db6d8ff4d-z99cq" [575689b8-3c51-45c8-874c-481e4b9db39b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:38:18.351331   80404 system_pods.go:61] "etcd-embed-certs-591460" [190c1552-6bca-41f2-9ea9-e415e1ae9406] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:38:18.351342   80404 system_pods.go:61] "kube-apiserver-embed-certs-591460" [c0fed28f-1d80-44eb-a66a-3a5b36704882] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:38:18.351350   80404 system_pods.go:61] "kube-controller-manager-embed-certs-591460" [79758f2a-2517-4a76-a3ae-536ac3adf781] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:38:18.351357   80404 system_pods.go:61] "kube-proxy-79kz5" [74ddb284-7cb2-46ec-ab9f-246dbfa0c4ec] Running
	I0612 21:38:18.351372   80404 system_pods.go:61] "kube-scheduler-embed-certs-591460" [d9916521-fcc1-4bf1-8b03-8a5553f07bd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:38:18.351383   80404 system_pods.go:61] "metrics-server-569cc877fc-bkhxn" [f78482c8-82ea-4dbd-999f-2e4c73c98b65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:38:18.351396   80404 system_pods.go:61] "storage-provisioner" [b3b117f7-ac44-4430-afb4-c6991ce1b71d] Running
	I0612 21:38:18.351407   80404 system_pods.go:74] duration metric: took 14.792966ms to wait for pod list to return data ...
	I0612 21:38:18.351419   80404 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:38:18.357736   80404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:38:18.357769   80404 node_conditions.go:123] node cpu capacity is 2
	I0612 21:38:18.357786   80404 node_conditions.go:105] duration metric: took 6.360028ms to run NodePressure ...
	I0612 21:38:18.357805   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:18.634312   80404 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:38:18.638679   80404 kubeadm.go:733] kubelet initialised
	I0612 21:38:18.638700   80404 kubeadm.go:734] duration metric: took 4.362243ms waiting for restarted kubelet to initialise ...
	I0612 21:38:18.638706   80404 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:38:18.643840   80404 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.648561   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.648585   80404 pod_ready.go:81] duration metric: took 4.721795ms for pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.648597   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.648606   80404 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.654013   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "etcd-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.654036   80404 pod_ready.go:81] duration metric: took 5.419602ms for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.654046   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "etcd-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.654054   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.659445   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.659468   80404 pod_ready.go:81] duration metric: took 5.404211ms for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.659479   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.659487   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.741451   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.741480   80404 pod_ready.go:81] duration metric: took 81.981354ms for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.741489   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.741495   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-79kz5" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:19.140710   80404 pod_ready.go:92] pod "kube-proxy-79kz5" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:19.140734   80404 pod_ready.go:81] duration metric: took 399.230349ms for pod "kube-proxy-79kz5" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:19.140744   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:15.513300   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:18.013924   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:20.024841   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:18.656575   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:18.657074   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:18.657111   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:18.657022   81824 retry.go:31] will retry after 3.518256927s: waiting for machine to come up
	I0612 21:38:22.176416   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.176901   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has current primary IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.176930   80762 main.go:141] libmachine: (old-k8s-version-983302) Found IP for machine: 192.168.50.81
	I0612 21:38:22.176965   80762 main.go:141] libmachine: (old-k8s-version-983302) Reserving static IP address...
	I0612 21:38:22.177385   80762 main.go:141] libmachine: (old-k8s-version-983302) Reserved static IP address: 192.168.50.81
	I0612 21:38:22.177422   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "old-k8s-version-983302", mac: "52:54:00:7b:c8:d2", ip: "192.168.50.81"} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.177435   80762 main.go:141] libmachine: (old-k8s-version-983302) Waiting for SSH to be available...
	I0612 21:38:22.177459   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | skip adding static IP to network mk-old-k8s-version-983302 - found existing host DHCP lease matching {name: "old-k8s-version-983302", mac: "52:54:00:7b:c8:d2", ip: "192.168.50.81"}
	I0612 21:38:22.177471   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Getting to WaitForSSH function...
	I0612 21:38:22.179728   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.180130   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.180158   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.180273   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH client type: external
	I0612 21:38:22.180334   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa (-rw-------)
	I0612 21:38:22.180368   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:22.180387   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | About to run SSH command:
	I0612 21:38:22.180399   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | exit 0
	I0612 21:38:22.308621   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:22.308979   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetConfigRaw
	I0612 21:38:22.309620   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:22.312747   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.313124   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.313155   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.313421   80762 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:38:22.313635   80762 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:22.313658   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:22.313884   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.316476   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.316961   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.317014   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.317221   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.317408   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.317600   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.317775   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.317955   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.318195   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.318207   80762 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:22.431693   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:22.431728   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.431979   80762 buildroot.go:166] provisioning hostname "old-k8s-version-983302"
	I0612 21:38:22.432006   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.432191   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.434830   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.435267   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.435300   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.435431   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.435598   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.435718   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.435826   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.436056   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.436237   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.436252   80762 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-983302 && echo "old-k8s-version-983302" | sudo tee /etc/hostname
	I0612 21:38:22.563119   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-983302
	
	I0612 21:38:22.563184   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.565915   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.566281   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.566315   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.566513   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.566704   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.566885   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.567021   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.567243   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.567463   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.567490   80762 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-983302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-983302/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-983302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:22.690443   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:22.690474   80762 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:22.690494   80762 buildroot.go:174] setting up certificates
	I0612 21:38:22.690504   80762 provision.go:84] configureAuth start
	I0612 21:38:22.690514   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.690774   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:22.693186   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.693528   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.693576   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.693689   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.695948   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.696285   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.696318   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.696432   80762 provision.go:143] copyHostCerts
	I0612 21:38:22.696501   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:22.696521   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:22.696583   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:22.696662   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:22.696671   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:22.696693   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:22.696774   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:22.696784   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:22.696803   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:22.696847   80762 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-983302 san=[127.0.0.1 192.168.50.81 localhost minikube old-k8s-version-983302]
	I0612 21:38:23.576378   80157 start.go:364] duration metric: took 53.730674695s to acquireMachinesLock for "no-preload-087875"
	I0612 21:38:23.576429   80157 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:38:23.576436   80157 fix.go:54] fixHost starting: 
	I0612 21:38:23.576844   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:23.576875   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:23.594879   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40925
	I0612 21:38:23.595284   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:23.595811   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:38:23.595836   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:23.596201   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:23.596404   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:23.596559   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:38:23.598372   80157 fix.go:112] recreateIfNeeded on no-preload-087875: state=Stopped err=<nil>
	I0612 21:38:23.598399   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	W0612 21:38:23.598558   80157 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:38:23.600649   80157 out.go:177] * Restarting existing kvm2 VM for "no-preload-087875" ...
	I0612 21:38:21.147354   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:23.147393   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:22.863618   80762 provision.go:177] copyRemoteCerts
	I0612 21:38:22.863672   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:22.863698   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.866979   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.867371   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.867403   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.867548   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.867734   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.867904   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.868126   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:22.958350   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 21:38:22.984409   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:23.009623   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0612 21:38:23.038026   80762 provision.go:87] duration metric: took 347.510898ms to configureAuth
	I0612 21:38:23.038063   80762 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:23.038309   80762 config.go:182] Loaded profile config "old-k8s-version-983302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:38:23.038390   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.041196   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.041634   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.041660   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.041842   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.042044   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.042222   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.042410   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.042580   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:23.042780   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:23.042799   80762 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:23.324862   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:23.324893   80762 machine.go:97] duration metric: took 1.01124225s to provisionDockerMachine
	I0612 21:38:23.324904   80762 start.go:293] postStartSetup for "old-k8s-version-983302" (driver="kvm2")
	I0612 21:38:23.324913   80762 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:23.324928   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.325240   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:23.325274   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.328007   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.328343   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.328372   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.328578   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.328770   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.328939   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.329068   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.416040   80762 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:23.420586   80762 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:23.420607   80762 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:23.420674   80762 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:23.420739   80762 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:23.420823   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:23.432266   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:23.460619   80762 start.go:296] duration metric: took 135.703593ms for postStartSetup
	I0612 21:38:23.460661   80762 fix.go:56] duration metric: took 18.536593686s for fixHost
	I0612 21:38:23.460684   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.463415   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.463745   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.463780   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.463909   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.464110   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.464248   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.464378   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.464533   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:23.464742   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:23.464754   80762 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:23.576211   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228303.539451044
	
	I0612 21:38:23.576231   80762 fix.go:216] guest clock: 1718228303.539451044
	I0612 21:38:23.576239   80762 fix.go:229] Guest: 2024-06-12 21:38:23.539451044 +0000 UTC Remote: 2024-06-12 21:38:23.460665921 +0000 UTC m=+270.637213069 (delta=78.785123ms)
	I0612 21:38:23.576285   80762 fix.go:200] guest clock delta is within tolerance: 78.785123ms
	I0612 21:38:23.576291   80762 start.go:83] releasing machines lock for "old-k8s-version-983302", held for 18.65227368s
	I0612 21:38:23.576316   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.576617   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:23.579493   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.579881   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.579913   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.580120   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580693   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580865   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580952   80762 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:23.581005   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.581111   80762 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:23.581141   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.584053   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584262   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584443   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.584479   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584587   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.584690   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.584728   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584757   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.584855   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.584918   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.584980   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.585067   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.585115   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.585227   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.666055   80762 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:23.688409   80762 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:23.848030   80762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:23.855302   80762 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:23.855383   80762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:23.874362   80762 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:23.874389   80762 start.go:494] detecting cgroup driver to use...
	I0612 21:38:23.874461   80762 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:23.893239   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:23.909774   80762 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:23.909844   80762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:23.926084   80762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:23.943341   80762 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:24.072731   80762 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:24.244551   80762 docker.go:233] disabling docker service ...
	I0612 21:38:24.244624   80762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:24.261862   80762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:24.277051   80762 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:24.426146   80762 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:24.560634   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:24.575339   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:24.595965   80762 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0612 21:38:24.596043   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.607814   80762 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:24.607892   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.619001   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.630982   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.644326   80762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:24.658640   80762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:24.673944   80762 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:24.673994   80762 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:24.693853   80762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:24.709251   80762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:24.856222   80762 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:25.023760   80762 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:25.023842   80762 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:25.029449   80762 start.go:562] Will wait 60s for crictl version
	I0612 21:38:25.029522   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:25.033750   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:25.080911   80762 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:25.081018   80762 ssh_runner.go:195] Run: crio --version
	I0612 21:38:25.111727   80762 ssh_runner.go:195] Run: crio --version
	I0612 21:38:25.145999   80762 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0612 21:38:22.512748   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:24.515486   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:23.602119   80157 main.go:141] libmachine: (no-preload-087875) Calling .Start
	I0612 21:38:23.602319   80157 main.go:141] libmachine: (no-preload-087875) Ensuring networks are active...
	I0612 21:38:23.603167   80157 main.go:141] libmachine: (no-preload-087875) Ensuring network default is active
	I0612 21:38:23.603533   80157 main.go:141] libmachine: (no-preload-087875) Ensuring network mk-no-preload-087875 is active
	I0612 21:38:23.603887   80157 main.go:141] libmachine: (no-preload-087875) Getting domain xml...
	I0612 21:38:23.604617   80157 main.go:141] libmachine: (no-preload-087875) Creating domain...
	I0612 21:38:24.978550   80157 main.go:141] libmachine: (no-preload-087875) Waiting to get IP...
	I0612 21:38:24.979551   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:24.979945   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:24.980007   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:24.979925   81986 retry.go:31] will retry after 224.557195ms: waiting for machine to come up
	I0612 21:38:25.206441   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.206928   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.206957   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.206875   81986 retry.go:31] will retry after 361.682908ms: waiting for machine to come up
	I0612 21:38:25.570564   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.571139   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.571184   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.571089   81986 retry.go:31] will retry after 328.335873ms: waiting for machine to come up
	I0612 21:38:25.901471   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.902020   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.902054   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.901953   81986 retry.go:31] will retry after 505.408325ms: waiting for machine to come up
	I0612 21:38:26.408636   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:26.409139   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:26.409167   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:26.409091   81986 retry.go:31] will retry after 749.519426ms: waiting for machine to come up
	I0612 21:38:27.160100   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:27.160563   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:27.160611   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:27.160537   81986 retry.go:31] will retry after 641.037463ms: waiting for machine to come up
	I0612 21:38:25.147420   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:25.151029   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:25.151402   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:25.151432   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:25.151726   80762 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:25.156561   80762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:25.171243   80762 kubeadm.go:877] updating cluster {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:25.171386   80762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:38:25.171429   80762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:25.225872   80762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:38:25.225936   80762 ssh_runner.go:195] Run: which lz4
	I0612 21:38:25.230447   80762 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0612 21:38:25.235452   80762 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:38:25.235485   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0612 21:38:27.033962   80762 crio.go:462] duration metric: took 1.803565745s to copy over tarball
	I0612 21:38:27.034045   80762 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:38:25.149629   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:27.651785   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:26.516743   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:29.013751   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:27.803722   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:27.804278   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:27.804316   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:27.804252   81986 retry.go:31] will retry after 1.184505978s: waiting for machine to come up
	I0612 21:38:28.990221   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:28.990736   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:28.990763   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:28.990709   81986 retry.go:31] will retry after 1.061139219s: waiting for machine to come up
	I0612 21:38:30.054187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:30.054768   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:30.054805   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:30.054718   81986 retry.go:31] will retry after 1.621121981s: waiting for machine to come up
	I0612 21:38:31.677355   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:31.677938   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:31.677966   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:31.677890   81986 retry.go:31] will retry after 2.17746309s: waiting for machine to come up
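The retry.go entries above come from a backoff loop that keeps polling the libvirt domain until a DHCP lease (and hence an IP address) appears. A generic sketch of that retry-with-backoff pattern is shown below; it is illustrative only, and waitForIP is a placeholder check, not minikube's retry package.

// retry_sketch.go - generic retry-with-backoff loop, illustrating the
// "will retry after ..." pattern in the libmachine log lines above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff calls fn until it succeeds, doubling the delay each time.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff
	}
	return err
}

func main() {
	tries := 0
	waitForIP := func() error { // placeholder for "machine has an IP yet?"
		tries++
		if tries < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}
	if err := retryWithBackoff(10, time.Second, waitForIP); err != nil {
		fmt.Println("gave up:", err)
	}
}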
	I0612 21:38:30.212028   80762 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.177947965s)
	I0612 21:38:30.212073   80762 crio.go:469] duration metric: took 3.178080815s to extract the tarball
	I0612 21:38:30.212085   80762 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:38:30.256957   80762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:30.297891   80762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:38:30.297917   80762 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0612 21:38:30.298025   80762 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.298045   80762 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.298055   80762 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.298021   80762 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0612 21:38:30.298106   80762 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.298062   80762 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.298004   80762 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:30.298079   80762 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.299755   80762 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0612 21:38:30.299842   80762 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.299848   80762 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.299843   80762 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:30.299866   80762 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.299876   80762 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.299905   80762 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.299755   80762 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.466739   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0612 21:38:30.516078   80762 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0612 21:38:30.516127   80762 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0612 21:38:30.516174   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.520362   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0612 21:38:30.545437   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.563320   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0612 21:38:30.599110   80762 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0612 21:38:30.599155   80762 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.599217   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.603578   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.639450   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0612 21:38:30.649462   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.650602   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.652555   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.656970   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.672136   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.766185   80762 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0612 21:38:30.766233   80762 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.766279   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.778901   80762 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0612 21:38:30.778946   80762 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.778952   80762 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0612 21:38:30.778983   80762 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.778994   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.779041   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.793610   80762 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0612 21:38:30.793650   80762 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.793698   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.807451   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.807482   80762 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0612 21:38:30.807518   80762 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.807458   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.807518   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.807557   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.807559   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.916470   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0612 21:38:30.916564   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0612 21:38:30.916576   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0612 21:38:30.916603   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0612 21:38:30.916646   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.953152   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0612 21:38:31.194046   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:31.341827   80762 cache_images.go:92] duration metric: took 1.043891497s to LoadCachedImages
	W0612 21:38:31.341922   80762 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
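The cache_images lines above inspect the runtime for each required image, mark missing ones as needing transfer, remove any stale tag, and fall back to the on-disk cache. A rough sketch of the presence check using crictl's JSON output follows; the JSON field names ("images", "repoTags") are assumptions about crictl's schema, and this is not minikube's own code.

// imagecheck_sketch.go - check whether an image reference is already present
// in the container runtime before loading it from the local cache.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"` // assumed field name
	} `json:"images"` // assumed field name
}

func imagePresent(ref string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == ref {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	for _, ref := range []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/pause:3.2",
	} {
		ok, err := imagePresent(ref)
		fmt.Printf("%s present=%v err=%v\n", ref, ok, err)
	}
}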
	I0612 21:38:31.341937   80762 kubeadm.go:928] updating node { 192.168.50.81 8443 v1.20.0 crio true true} ...
	I0612 21:38:31.342064   80762 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-983302 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:38:31.342154   80762 ssh_runner.go:195] Run: crio config
	I0612 21:38:31.395673   80762 cni.go:84] Creating CNI manager for ""
	I0612 21:38:31.395706   80762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:31.395722   80762 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:38:31.395744   80762 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-983302 NodeName:old-k8s-version-983302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0612 21:38:31.395918   80762 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-983302"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:38:31.395995   80762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0612 21:38:31.410706   80762 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:38:31.410785   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:38:31.425161   80762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0612 21:38:31.445883   80762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:38:31.463605   80762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
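The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new as a single multi-document YAML combining InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. Below is a small sketch that splits such a file and reports each document's apiVersion and kind; it uses generic maps rather than the real kubeadm API types, and the path is taken from the log.

// kubeadmyaml_sketch.go - split a multi-document kubeadm YAML (like the one
// printed above) and print each document's apiVersion and kind.
package main

import (
	"fmt"
	"os"
	"strings"

	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path from the log
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		var obj map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
			panic(err)
		}
		fmt.Printf("%v / %v\n", obj["apiVersion"], obj["kind"])
	}
}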
	I0612 21:38:31.482797   80762 ssh_runner.go:195] Run: grep 192.168.50.81	control-plane.minikube.internal$ /etc/hosts
	I0612 21:38:31.486974   80762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:31.499681   80762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:31.645490   80762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:31.668769   80762 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302 for IP: 192.168.50.81
	I0612 21:38:31.668797   80762 certs.go:194] generating shared ca certs ...
	I0612 21:38:31.668820   80762 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:31.668987   80762 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:38:31.669061   80762 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:38:31.669088   80762 certs.go:256] generating profile certs ...
	I0612 21:38:31.669212   80762 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/client.key
	I0612 21:38:31.669309   80762 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key.1098c83c
	I0612 21:38:31.669373   80762 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key
	I0612 21:38:31.669548   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:38:31.669598   80762 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:38:31.669613   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:38:31.669662   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:38:31.669723   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:38:31.669759   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:38:31.669830   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:31.670835   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:38:31.717330   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:38:31.754900   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:38:31.798099   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:38:31.839647   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0612 21:38:31.883454   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:38:31.920765   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:38:31.953069   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0612 21:38:31.978134   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:38:32.002475   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:38:32.027784   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:38:32.053563   80762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:38:32.074493   80762 ssh_runner.go:195] Run: openssl version
	I0612 21:38:32.080620   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:38:32.093531   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.098615   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.098688   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.104777   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:38:32.116551   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:38:32.130188   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.135197   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.135279   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.142777   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:38:32.156051   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:38:32.169866   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.175249   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.175340   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.181561   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:38:32.193430   80762 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:38:32.198235   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:38:32.204654   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:38:32.210771   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:38:32.216966   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:38:32.223203   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:38:32.230990   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
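The openssl "-checkend 86400" runs above verify that each control-plane certificate remains valid for at least the next 24 hours. An equivalent check in Go's crypto/x509 is sketched below; the path is one of the certificates named in the log, and the sketch is illustrative rather than minikube's implementation.

// certexpiry_sketch.go - Go equivalent of `openssl x509 -checkend 86400`:
// fail if the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt" // from the log
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Printf("certificate %s expires within 24h (NotAfter=%s)\n", path, cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("certificate %s is valid past the next 24h\n", path)
}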
	I0612 21:38:32.237290   80762 kubeadm.go:391] StartCluster: {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:38:32.237446   80762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:38:32.237503   80762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:32.282436   80762 cri.go:89] found id: ""
	I0612 21:38:32.282516   80762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:38:32.295283   80762 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:38:32.295313   80762 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:38:32.295321   80762 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:38:32.295400   80762 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:38:32.307483   80762 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:38:32.308555   80762 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-983302" does not appear in /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:38:32.309335   80762 kubeconfig.go:62] /home/jenkins/minikube-integration/17779-14199/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-983302" cluster setting kubeconfig missing "old-k8s-version-983302" context setting]
	I0612 21:38:32.310486   80762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:32.397524   80762 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:38:32.411765   80762 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.81
	I0612 21:38:32.411797   80762 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:38:32.411807   80762 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:38:32.411849   80762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:32.460009   80762 cri.go:89] found id: ""
	I0612 21:38:32.460078   80762 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:38:32.481670   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:38:32.493664   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:38:32.493684   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:38:32.493734   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:38:32.503974   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:38:32.504044   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:38:32.515971   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:38:32.525772   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:38:32.525832   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:38:32.537137   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:38:32.548539   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:38:32.548600   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:38:32.560401   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:38:32.570608   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:38:32.570681   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:38:32.582763   80762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:38:32.594407   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:32.734633   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:30.151681   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:31.658859   80404 pod_ready.go:92] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:31.658881   80404 pod_ready.go:81] duration metric: took 12.518130926s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:31.658890   80404 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:33.666360   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:31.357093   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:33.513222   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:33.857141   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:33.857675   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:33.857702   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:33.857648   81986 retry.go:31] will retry after 2.485654549s: waiting for machine to come up
	I0612 21:38:36.344611   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:36.345117   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:36.345148   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:36.345075   81986 retry.go:31] will retry after 3.560063035s: waiting for machine to come up
	I0612 21:38:33.526337   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.768139   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.896716   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.986708   80762 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:38:33.986832   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:34.487194   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:34.987580   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.486966   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.987793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:36.487534   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:36.987526   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:37.487035   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.669161   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:38.166177   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:35.513787   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:38.011903   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:39.907588   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:39.908051   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:39.908110   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:39.907994   81986 retry.go:31] will retry after 4.524521166s: waiting for machine to come up
	I0612 21:38:37.986904   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:38.487262   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:38.986907   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:39.486895   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:39.987060   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.487385   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.987049   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:41.487325   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:41.987550   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:42.487225   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.665078   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:42.665731   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:44.666653   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:40.512741   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:42.513175   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:45.013451   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:44.434330   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.434850   80157 main.go:141] libmachine: (no-preload-087875) Found IP for machine: 192.168.72.63
	I0612 21:38:44.434883   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has current primary IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.434893   80157 main.go:141] libmachine: (no-preload-087875) Reserving static IP address...
	I0612 21:38:44.435324   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "no-preload-087875", mac: "52:54:00:6b:a2:aa", ip: "192.168.72.63"} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.435358   80157 main.go:141] libmachine: (no-preload-087875) Reserved static IP address: 192.168.72.63
	I0612 21:38:44.435378   80157 main.go:141] libmachine: (no-preload-087875) DBG | skip adding static IP to network mk-no-preload-087875 - found existing host DHCP lease matching {name: "no-preload-087875", mac: "52:54:00:6b:a2:aa", ip: "192.168.72.63"}
	I0612 21:38:44.435388   80157 main.go:141] libmachine: (no-preload-087875) Waiting for SSH to be available...
	I0612 21:38:44.435397   80157 main.go:141] libmachine: (no-preload-087875) DBG | Getting to WaitForSSH function...
	I0612 21:38:44.437881   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.438196   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.438218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.438385   80157 main.go:141] libmachine: (no-preload-087875) DBG | Using SSH client type: external
	I0612 21:38:44.438414   80157 main.go:141] libmachine: (no-preload-087875) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa (-rw-------)
	I0612 21:38:44.438452   80157 main.go:141] libmachine: (no-preload-087875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:44.438469   80157 main.go:141] libmachine: (no-preload-087875) DBG | About to run SSH command:
	I0612 21:38:44.438489   80157 main.go:141] libmachine: (no-preload-087875) DBG | exit 0
	I0612 21:38:44.571149   80157 main.go:141] libmachine: (no-preload-087875) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:44.571499   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetConfigRaw
	I0612 21:38:44.572172   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:44.574754   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.575142   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.575187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.575406   80157 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/config.json ...
	I0612 21:38:44.575580   80157 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:44.575595   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:44.575825   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.578584   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.579008   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.579030   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.579214   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.579394   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.579534   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.579684   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.579924   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.580096   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.580109   80157 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:44.691573   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:44.691609   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.691890   80157 buildroot.go:166] provisioning hostname "no-preload-087875"
	I0612 21:38:44.691914   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.692120   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.695218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.695697   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.695729   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.695783   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.695986   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.696200   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.696383   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.696572   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.696776   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.696794   80157 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-087875 && echo "no-preload-087875" | sudo tee /etc/hostname
	I0612 21:38:44.821857   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-087875
	
	I0612 21:38:44.821893   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.824821   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.825263   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.825295   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.825523   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.825740   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.825912   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.826024   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.826187   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.826406   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.826430   80157 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-087875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-087875/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-087875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:44.948871   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
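The "Using SSH client type: native" calls above run provisioning commands such as hostname, the /etc/hostname write, and the /etc/hosts edit over SSH with the machine's private key. A sketch of that style of call using golang.org/x/crypto/ssh follows; it is illustrative only and not libmachine's own client, with the key path and address taken from the log.

// sshrun_sketch.go - run a command over SSH with a private-key credential,
// similar to the native SSH client calls in the log above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; not for production use
	}
	client, err := ssh.Dial("tcp", "192.168.72.63:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname: %s", out)
}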
	I0612 21:38:44.948904   80157 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:44.948930   80157 buildroot.go:174] setting up certificates
	I0612 21:38:44.948941   80157 provision.go:84] configureAuth start
	I0612 21:38:44.948954   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.949247   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:44.952166   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.952511   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.952538   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.952662   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.955149   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.955483   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.955505   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.955658   80157 provision.go:143] copyHostCerts
	I0612 21:38:44.955731   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:44.955743   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:44.955807   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:44.955929   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:44.955942   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:44.955975   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:44.956052   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:44.956059   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:44.956078   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:44.956125   80157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.no-preload-087875 san=[127.0.0.1 192.168.72.63 localhost minikube no-preload-087875]
	I0612 21:38:45.138701   80157 provision.go:177] copyRemoteCerts
	I0612 21:38:45.138758   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:45.138781   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.141540   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.142011   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.142055   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.142199   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.142457   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.142603   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.142765   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.234480   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:45.259043   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0612 21:38:45.290511   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:38:45.316377   80157 provision.go:87] duration metric: took 367.423709ms to configureAuth
	I0612 21:38:45.316403   80157 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:45.316607   80157 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:45.316684   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.319596   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.320160   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.320187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.320384   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.320598   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.320778   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.320973   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.321203   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:45.321368   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:45.321387   80157 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:45.611478   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:45.611511   80157 machine.go:97] duration metric: took 1.035919707s to provisionDockerMachine
	I0612 21:38:45.611523   80157 start.go:293] postStartSetup for "no-preload-087875" (driver="kvm2")
	I0612 21:38:45.611533   80157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:45.611556   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.611843   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:45.611862   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.615071   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.615542   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.615582   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.615715   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.615889   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.616028   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.616204   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.707710   80157 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:45.712155   80157 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:45.712177   80157 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:45.712235   80157 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:45.712301   80157 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:45.712386   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:45.722654   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:45.747626   80157 start.go:296] duration metric: took 136.091584ms for postStartSetup
	I0612 21:38:45.747666   80157 fix.go:56] duration metric: took 22.171227252s for fixHost
	I0612 21:38:45.747685   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.750588   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.750972   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.750999   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.751231   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.751443   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.751598   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.751773   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.752005   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:45.752181   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:45.752195   80157 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 21:38:45.864042   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228325.837473906
	
	I0612 21:38:45.864068   80157 fix.go:216] guest clock: 1718228325.837473906
	I0612 21:38:45.864079   80157 fix.go:229] Guest: 2024-06-12 21:38:45.837473906 +0000 UTC Remote: 2024-06-12 21:38:45.747669277 +0000 UTC m=+358.493088442 (delta=89.804629ms)
	I0612 21:38:45.864106   80157 fix.go:200] guest clock delta is within tolerance: 89.804629ms
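The fix.go lines above read the guest clock over SSH, compare it against the local timestamp, and accept the drift if it falls within a tolerance. A rough Go sketch of that comparison follows; the 2-second tolerance is an assumption for illustration, not minikube's documented value.

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaOK reports whether the guest clock is close enough to the host
    // clock, mirroring the "guest clock delta is within tolerance" log line.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(89 * time.Millisecond)               // roughly the ~89ms delta seen above
    	delta, ok := clockDeltaOK(guest, host, 2*time.Second)  // assumed tolerance
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }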
	I0612 21:38:45.864114   80157 start.go:83] releasing machines lock for "no-preload-087875", held for 22.287706082s
	I0612 21:38:45.864152   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.864448   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:45.867230   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.867603   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.867633   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.867768   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868293   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868453   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868535   80157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:45.868575   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.868663   80157 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:45.868681   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.871218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871489   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871678   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.871719   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871915   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.872061   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.872085   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.872109   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.872240   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.872246   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.872522   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.872529   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.872692   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.872868   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.953249   80157 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:45.976778   80157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:46.124511   80157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:46.130509   80157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:46.130575   80157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:46.149670   80157 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:46.149691   80157 start.go:494] detecting cgroup driver to use...
	I0612 21:38:46.149755   80157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:46.167865   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:46.182896   80157 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:46.182951   80157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:46.197058   80157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:46.211517   80157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:46.331986   80157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:46.500675   80157 docker.go:233] disabling docker service ...
	I0612 21:38:46.500745   80157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:46.516858   80157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:46.530617   80157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:46.674917   80157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:46.810090   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:46.825079   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:46.843895   80157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:38:46.843963   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.854170   80157 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:46.854245   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.864699   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.875057   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.886063   80157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:46.897688   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.908984   80157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.926803   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.939373   80157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:46.948868   80157 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:46.948922   80157 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:46.963593   80157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
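The sysctl failure and modprobe fallback above amount to: if the bridge-netfilter knob is absent, load the br_netfilter module, then enable IPv4 forwarding. A simplified Go sketch of that sequence (it must run as root; the commands mirror the log, the wrapper itself is illustrative):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// The bridge-nf-call-iptables knob only exists once br_netfilter is loaded.
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			log.Fatalf("modprobe br_netfilter: %v\n%s", err, out)
    		}
    	}
    	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
    		log.Fatalf("enable ip_forward: %v", err)
    	}
    }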
	I0612 21:38:46.973735   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:47.108669   80157 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:47.249938   80157 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:47.250044   80157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:47.255480   80157 start.go:562] Will wait 60s for crictl version
	I0612 21:38:47.255556   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.259730   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:47.303074   80157 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:47.303187   80157 ssh_runner.go:195] Run: crio --version
	I0612 21:38:47.332225   80157 ssh_runner.go:195] Run: crio --version
	I0612 21:38:47.363628   80157 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:38:42.987579   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:43.487465   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:43.987265   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:44.487935   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:44.987399   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:45.487793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:45.986898   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:46.486985   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:46.986848   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:47.486947   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:47.164573   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:49.165711   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:47.512195   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:49.512366   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:47.365068   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:47.367703   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:47.368079   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:47.368103   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:47.368325   80157 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:47.372608   80157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:47.386411   80157 kubeadm.go:877] updating cluster {Name:no-preload-087875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:47.386750   80157 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:38:47.386796   80157 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:47.422165   80157 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:38:47.422189   80157 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0612 21:38:47.422227   80157 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:47.422280   80157 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.422355   80157 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.422370   80157 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.422311   80157 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.422347   80157 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.422318   80157 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0612 21:38:47.422599   80157 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.423599   80157 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0612 21:38:47.423610   80157 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.423612   80157 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.423630   80157 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:47.423626   80157 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.423699   80157 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.423737   80157 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.423720   80157 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.556807   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0612 21:38:47.557424   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.561887   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.569402   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.571880   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.576879   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.587848   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.759890   80157 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0612 21:38:47.759926   80157 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.759947   80157 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0612 21:38:47.759973   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.759976   80157 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0612 21:38:47.760006   80157 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.760015   80157 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0612 21:38:47.759977   80157 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.760061   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760063   80157 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.760075   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760073   80157 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0612 21:38:47.760091   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760101   80157 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.760164   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.766878   80157 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0612 21:38:47.766905   80157 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.766943   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.777168   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.777197   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.778414   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.778459   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.778414   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.779057   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.882668   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0612 21:38:47.882770   80157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.902416   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0612 21:38:47.902532   80157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:47.917388   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0612 21:38:47.917417   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0612 21:38:47.917417   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0612 21:38:47.917473   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0612 21:38:47.917501   80157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:38:47.917528   80157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:47.917545   80157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:38:47.917500   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0612 21:38:47.917558   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.917594   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.917502   80157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:47.917559   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0612 21:38:47.929251   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0612 21:38:47.929299   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0612 21:38:47.929308   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0612 21:38:48.312589   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:50.713720   80157 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1: (2.796151375s)
	I0612 21:38:50.713767   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0612 21:38:50.713877   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.796263274s)
	I0612 21:38:50.713901   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0612 21:38:50.713877   80157 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.401254109s)
	I0612 21:38:50.713921   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:50.713966   80157 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0612 21:38:50.713987   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:50.714017   80157 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:50.714063   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.987863   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:48.487299   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:48.986886   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:49.486972   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:49.987859   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:50.487034   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:50.987724   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.486948   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.986873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:52.487668   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.665638   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:53.665855   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:51.512765   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:54.011870   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:53.169682   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.455668553s)
	I0612 21:38:53.169705   80157 ssh_runner.go:235] Completed: which crictl: (2.455619981s)
	I0612 21:38:53.169714   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0612 21:38:53.169741   80157 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:53.169759   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:53.169784   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:53.216895   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0612 21:38:53.217020   80157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:38:57.220343   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.050521066s)
	I0612 21:38:57.220376   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0612 21:38:57.220397   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:57.220444   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:57.220443   80157 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (4.003396955s)
	I0612 21:38:57.220487   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0612 21:38:52.987635   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:53.487500   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:53.987860   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:54.487855   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:54.986868   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:55.487259   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:55.987902   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.487535   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.987269   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:57.487542   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.166299   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.665085   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:56.012847   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.557142   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.682288   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.46182102s)
	I0612 21:38:58.682313   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0612 21:38:58.682337   80157 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:38:58.682376   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:39:00.576373   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.893964365s)
	I0612 21:39:00.576412   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0612 21:39:00.576443   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:39:00.576504   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:38:57.987222   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:58.486976   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:58.986913   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:59.487269   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:59.987289   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.487208   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.987690   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:01.487283   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:01.987541   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:02.487589   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.667732   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:03.165317   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:01.012684   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:03.015111   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:02.445930   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.86940281s)
	I0612 21:39:02.445960   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0612 21:39:02.445994   80157 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:39:02.446071   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:39:03.393330   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0612 21:39:03.393375   80157 cache_images.go:123] Successfully loaded all cached images
	I0612 21:39:03.393382   80157 cache_images.go:92] duration metric: took 15.9711807s to LoadCachedImages
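Each cached image above follows the same pattern: stat the file under /var/lib/minikube/images on the VM, skip the transfer if it already exists, then load it with podman. A condensed Go sketch of that loop, using local exec instead of minikube's ssh_runner and an illustrative subset of the image list:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    )

    func main() {
    	images := []string{ // illustrative subset of the image list in the log
    		"kube-apiserver_v1.30.1",
    		"etcd_3.5.12-0",
    		"coredns_v1.11.1",
    	}
    	dir := "/var/lib/minikube/images"
    	for _, img := range images {
    		path := filepath.Join(dir, img)
    		if _, err := os.Stat(path); err != nil {
    			// In the real flow the image would be copied over SSH first; here we just skip.
    			log.Printf("skipping %s: not on disk (%v)", img, err)
    			continue
    		}
    		// Equivalent of: sudo podman load -i /var/lib/minikube/images/<img>
    		if out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput(); err != nil {
    			log.Fatalf("podman load %s: %v\n%s", img, err, out)
    		}
    		log.Printf("loaded %s", img)
    	}
    }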
	I0612 21:39:03.393397   80157 kubeadm.go:928] updating node { 192.168.72.63 8443 v1.30.1 crio true true} ...
	I0612 21:39:03.393543   80157 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-087875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:39:03.393658   80157 ssh_runner.go:195] Run: crio config
	I0612 21:39:03.448859   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:39:03.448884   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:39:03.448901   80157 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:39:03.448930   80157 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.63 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-087875 NodeName:no-preload-087875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:39:03.449103   80157 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-087875"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:39:03.449181   80157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:39:03.462756   80157 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:39:03.462825   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:39:03.472653   80157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0612 21:39:03.491567   80157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:39:03.509239   80157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0612 21:39:03.527802   80157 ssh_runner.go:195] Run: grep 192.168.72.63	control-plane.minikube.internal$ /etc/hosts
	I0612 21:39:03.531523   80157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:39:03.543748   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:39:03.666376   80157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:39:03.683563   80157 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875 for IP: 192.168.72.63
	I0612 21:39:03.683587   80157 certs.go:194] generating shared ca certs ...
	I0612 21:39:03.683606   80157 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:39:03.683766   80157 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:39:03.683816   80157 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:39:03.683831   80157 certs.go:256] generating profile certs ...
	I0612 21:39:03.683927   80157 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/client.key
	I0612 21:39:03.684010   80157 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.key.13709275
	I0612 21:39:03.684066   80157 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.key
	I0612 21:39:03.684217   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:39:03.684259   80157 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:39:03.684272   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:39:03.684318   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:39:03.684364   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:39:03.684395   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:39:03.684455   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:39:03.685098   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:39:03.732817   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:39:03.771449   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:39:03.800774   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:39:03.831845   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 21:39:03.862000   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 21:39:03.901036   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:39:03.925025   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:39:03.950862   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:39:03.974222   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:39:04.002698   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:39:04.028173   80157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:39:04.044685   80157 ssh_runner.go:195] Run: openssl version
	I0612 21:39:04.050600   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:39:04.061893   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.066371   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.066424   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.072463   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:39:04.083929   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:39:04.094777   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.099380   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.099435   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.105125   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:39:04.116191   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:39:04.127408   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.132234   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.132315   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.138401   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:39:04.149542   80157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:39:04.154133   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:39:04.160171   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:39:04.166410   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:39:04.172650   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:39:04.178506   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:39:04.184375   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
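The certificate handling logged above combines two standard OpenSSL idioms: each CA file is symlinked into /etc/ssl/certs under its subject-hash name (the b5213941.0 / 51391683.0 / 3ec20f2e.0 entries), and each serving certificate is checked for at least one more day of validity with -checkend 86400. A minimal sketch of the same checks run by hand, using file names taken from the log (the echo messages are illustrative only):

    # Link a CA into the hashed-lookup directory; <hash>.0 is the name OpenSSL resolves
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"

    # Exit non-zero if the certificate expires within the next 86400 seconds (24h)
    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "certificate valid for at least another day"
    else
      echo "certificate expiring soon; minikube would regenerate it"
    fi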
	I0612 21:39:04.190412   80157 kubeadm.go:391] StartCluster: {Name:no-preload-087875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:39:04.190524   80157 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:39:04.190584   80157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:39:04.235297   80157 cri.go:89] found id: ""
	I0612 21:39:04.235362   80157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:39:04.246400   80157 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:39:04.246429   80157 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:39:04.246449   80157 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:39:04.246499   80157 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:39:04.257137   80157 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:39:04.258277   80157 kubeconfig.go:125] found "no-preload-087875" server: "https://192.168.72.63:8443"
	I0612 21:39:04.260656   80157 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:39:04.270637   80157 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.63
	I0612 21:39:04.270666   80157 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:39:04.270675   80157 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:39:04.270730   80157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:39:04.316487   80157 cri.go:89] found id: ""
	I0612 21:39:04.316550   80157 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:39:04.334814   80157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:39:04.346430   80157 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:39:04.346451   80157 kubeadm.go:156] found existing configuration files:
	
	I0612 21:39:04.346500   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:39:04.356362   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:39:04.356417   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:39:04.366999   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:39:04.378005   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:39:04.378061   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:39:04.388052   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:39:04.397130   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:39:04.397185   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:39:04.407053   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:39:04.416338   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:39:04.416395   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:39:04.426475   80157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:39:04.436852   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:04.565452   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.461610   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.676493   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.767236   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.870855   80157 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:39:05.870960   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.372034   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.871680   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.906242   80157 api_server.go:72] duration metric: took 1.035387498s to wait for apiserver process to appear ...
	I0612 21:39:06.906273   80157 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:39:06.906296   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:06.906883   80157 api_server.go:269] stopped: https://192.168.72.63:8443/healthz: Get "https://192.168.72.63:8443/healthz": dial tcp 192.168.72.63:8443: connect: connection refused
	I0612 21:39:02.987853   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:03.487382   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:03.987303   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:04.487852   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:04.987464   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.486928   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.987660   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.487208   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.987822   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:07.487497   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.166502   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:07.665452   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:09.665766   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:05.512792   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:08.012392   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:10.014073   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:07.407227   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.589285   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:39:09.589319   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:39:09.589336   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.726716   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:09.726753   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:09.907032   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.917718   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:09.917746   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:10.406997   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:10.412127   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:10.412156   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:10.906700   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:10.911262   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 200:
	ok
	I0612 21:39:10.918778   80157 api_server.go:141] control plane version: v1.30.1
	I0612 21:39:10.918813   80157 api_server.go:131] duration metric: took 4.012531107s to wait for apiserver health ...
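The four seconds of healthz polling above follow the usual apiserver start-up sequence: requests are first refused outright (connection refused), then rejected with 403 before the RBAC bootstrap roles exist, then answered with 500 while post-start hooks are still completing, and finally with 200 and the body "ok". A rough shell equivalent of the probe, assuming the same endpoint and a half-second retry interval (both the interval and the use of an insecure anonymous request are illustrative, not minikube's exact client):

    # Poll the apiserver health endpoint until it reports "ok"
    until [ "$(curl -sk https://192.168.72.63:8443/healthz)" = "ok" ]; do
      sleep 0.5
    done
    echo "apiserver healthy"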
	I0612 21:39:10.918824   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:39:10.918832   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:39:10.921012   80157 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:39:10.922401   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:39:10.948209   80157 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
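The 496-byte file written above is the bridge CNI configuration minikube generates for the kvm2 driver + crio runtime combination; the log records only its size and destination, not its contents. Purely as an illustration of the kind of conflist CRI-O loads from /etc/cni/net.d, a generic bridge + host-local configuration looks roughly like the following (the plugin list and the 10.244.0.0/16 subnet are assumptions, not taken from the log):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }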
	I0612 21:39:10.974530   80157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:39:10.986054   80157 system_pods.go:59] 8 kube-system pods found
	I0612 21:39:10.986091   80157 system_pods.go:61] "coredns-7db6d8ff4d-sh68b" [17691219-bfda-443b-8049-e6e966aadb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:39:10.986102   80157 system_pods.go:61] "etcd-no-preload-087875" [3048b12a-4354-45fd-99c7-d2a84035e102] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:39:10.986114   80157 system_pods.go:61] "kube-apiserver-no-preload-087875" [0f39a5fd-1a64-479f-bb28-c19bc10b7ed3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:39:10.986127   80157 system_pods.go:61] "kube-controller-manager-no-preload-087875" [62cc49b8-b05f-4371-aa17-bea17d08d2f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:39:10.986141   80157 system_pods.go:61] "kube-proxy-htv9h" [e3eb4693-7896-4dd2-98b8-91f06b028a1e] Running
	I0612 21:39:10.986158   80157 system_pods.go:61] "kube-scheduler-no-preload-087875" [ef833b9d-75ca-43bd-b196-30594775b174] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:39:10.986170   80157 system_pods.go:61] "metrics-server-569cc877fc-d5mj6" [79ba2aad-c942-4162-b69a-5c7dd138a618] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:39:10.986178   80157 system_pods.go:61] "storage-provisioner" [5793c778-1a5c-4cfe-924a-b85b72df53cd] Running
	I0612 21:39:10.986187   80157 system_pods.go:74] duration metric: took 11.634011ms to wait for pod list to return data ...
	I0612 21:39:10.986199   80157 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:39:10.992801   80157 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:39:10.992843   80157 node_conditions.go:123] node cpu capacity is 2
	I0612 21:39:10.992856   80157 node_conditions.go:105] duration metric: took 6.648025ms to run NodePressure ...
	I0612 21:39:10.992878   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:11.263413   80157 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:39:11.271758   80157 kubeadm.go:733] kubelet initialised
	I0612 21:39:11.271781   80157 kubeadm.go:734] duration metric: took 8.347232ms waiting for restarted kubelet to initialise ...
	I0612 21:39:11.271789   80157 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:39:11.277940   80157 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:07.987732   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:08.486974   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:08.986873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:09.486941   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:09.986929   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:10.487754   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:10.987685   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:11.486910   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:11.987457   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:12.486873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:12.165604   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:14.166986   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:12.029928   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:14.512085   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:13.287555   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:15.786345   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:12.987394   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:13.486915   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:13.987880   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:14.486881   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:14.986951   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:15.487462   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:15.986850   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.487213   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.987066   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:17.487882   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.666123   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:18.666354   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:16.512936   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:19.013463   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:18.285110   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:20.788396   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:21.284869   80157 pod_ready.go:92] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:21.284902   80157 pod_ready.go:81] duration metric: took 10.006929439s for pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:21.284916   80157 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:17.987273   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:18.486996   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:18.987836   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:19.487622   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:19.987381   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:20.487005   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:20.987638   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.487670   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.987552   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:22.487438   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.166215   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:23.665272   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:21.512836   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:24.014108   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:23.291502   80157 pod_ready.go:102] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:25.791813   80157 pod_ready.go:92] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.791842   80157 pod_ready.go:81] duration metric: took 4.506916362s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.791854   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.796901   80157 pod_ready.go:92] pod "kube-apiserver-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.796928   80157 pod_ready.go:81] duration metric: took 5.066599ms for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.796939   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.801550   80157 pod_ready.go:92] pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.801571   80157 pod_ready.go:81] duration metric: took 4.624771ms for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.801580   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-htv9h" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.806178   80157 pod_ready.go:92] pod "kube-proxy-htv9h" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.806195   80157 pod_ready.go:81] duration metric: took 4.609956ms for pod "kube-proxy-htv9h" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.806204   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.809883   80157 pod_ready.go:92] pod "kube-scheduler-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.809902   80157 pod_ready.go:81] duration metric: took 3.691999ms for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.809914   80157 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:22.987165   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:23.487122   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:23.987804   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:24.487583   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:24.987647   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.487126   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.987251   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:26.486996   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:26.987044   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:27.486911   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.668272   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:28.164809   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:26.513220   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:29.013047   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:27.817352   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:30.315600   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:27.987822   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:28.487496   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:28.987166   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:29.487892   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:29.987787   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.487315   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.987933   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:31.487255   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:31.987793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:32.487881   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.165900   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.167795   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:34.665939   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:31.013473   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:33.015281   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.316680   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:34.317063   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:36.816905   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.987267   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:33.487678   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:33.987296   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:33.987371   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:34.028670   80762 cri.go:89] found id: ""
	I0612 21:39:34.028699   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.028710   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:34.028717   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:34.028778   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:34.068371   80762 cri.go:89] found id: ""
	I0612 21:39:34.068400   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.068412   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:34.068419   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:34.068485   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:34.104605   80762 cri.go:89] found id: ""
	I0612 21:39:34.104634   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.104643   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:34.104650   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:34.104745   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:34.150301   80762 cri.go:89] found id: ""
	I0612 21:39:34.150327   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.150335   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:34.150341   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:34.150396   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:34.191426   80762 cri.go:89] found id: ""
	I0612 21:39:34.191462   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.191475   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:34.191484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:34.191562   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:34.228483   80762 cri.go:89] found id: ""
	I0612 21:39:34.228523   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.228535   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:34.228543   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:34.228653   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:34.262834   80762 cri.go:89] found id: ""
	I0612 21:39:34.262863   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.262873   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:34.262881   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:34.262944   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:34.298283   80762 cri.go:89] found id: ""
	I0612 21:39:34.298312   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.298321   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:34.298330   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:34.298340   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:34.350889   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:34.350918   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:34.365264   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:34.365289   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:34.508130   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:34.508162   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:34.508180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:34.572036   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:34.572076   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:37.114371   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:37.127410   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:37.127492   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:37.168684   80762 cri.go:89] found id: ""
	I0612 21:39:37.168705   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.168714   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:37.168723   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:37.168798   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:37.208765   80762 cri.go:89] found id: ""
	I0612 21:39:37.208797   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.208808   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:37.208815   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:37.208875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:37.266245   80762 cri.go:89] found id: ""
	I0612 21:39:37.266270   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.266277   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:37.266283   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:37.266331   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:37.313557   80762 cri.go:89] found id: ""
	I0612 21:39:37.313586   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.313597   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:37.313606   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:37.313677   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:37.353292   80762 cri.go:89] found id: ""
	I0612 21:39:37.353318   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.353325   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:37.353332   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:37.353389   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:37.391940   80762 cri.go:89] found id: ""
	I0612 21:39:37.391974   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.391984   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:37.392015   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:37.392078   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:37.432133   80762 cri.go:89] found id: ""
	I0612 21:39:37.432154   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.432166   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:37.432174   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:37.432228   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:37.468274   80762 cri.go:89] found id: ""
	I0612 21:39:37.468302   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.468310   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:37.468328   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:37.468347   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:37.543904   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:37.543941   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:37.586957   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:37.586982   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:37.641247   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:37.641288   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:37.657076   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:37.657101   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:37.729279   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:37.165427   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:39.166383   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:35.512174   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:37.513222   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:40.012806   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:39.317119   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:41.817268   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:40.229638   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:40.243825   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:40.243889   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:40.282795   80762 cri.go:89] found id: ""
	I0612 21:39:40.282821   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.282829   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:40.282834   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:40.282879   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:40.320211   80762 cri.go:89] found id: ""
	I0612 21:39:40.320236   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.320246   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:40.320252   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:40.320338   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:40.356270   80762 cri.go:89] found id: ""
	I0612 21:39:40.356292   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.356300   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:40.356306   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:40.356353   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:40.394667   80762 cri.go:89] found id: ""
	I0612 21:39:40.394691   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.394699   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:40.394704   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:40.394751   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:40.432765   80762 cri.go:89] found id: ""
	I0612 21:39:40.432794   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.432804   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:40.432811   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:40.432883   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:40.472347   80762 cri.go:89] found id: ""
	I0612 21:39:40.472386   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.472406   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:40.472414   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:40.472477   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:40.508414   80762 cri.go:89] found id: ""
	I0612 21:39:40.508445   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.508456   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:40.508464   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:40.508521   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:40.546938   80762 cri.go:89] found id: ""
	I0612 21:39:40.546964   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.546972   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:40.546981   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:40.546993   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:40.621356   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:40.621380   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:40.621398   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:40.703830   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:40.703865   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:40.744915   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:40.744965   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:40.798883   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:40.798920   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:41.167469   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:43.667403   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:42.512351   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:44.512639   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:44.317053   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:46.317350   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:43.315905   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:43.330150   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:43.330221   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:43.377307   80762 cri.go:89] found id: ""
	I0612 21:39:43.377337   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.377347   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:43.377362   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:43.377426   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:43.412608   80762 cri.go:89] found id: ""
	I0612 21:39:43.412638   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.412648   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:43.412654   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:43.412718   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:43.446716   80762 cri.go:89] found id: ""
	I0612 21:39:43.446746   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.446755   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:43.446762   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:43.446823   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:43.484607   80762 cri.go:89] found id: ""
	I0612 21:39:43.484636   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.484647   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:43.484655   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:43.484700   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:43.522400   80762 cri.go:89] found id: ""
	I0612 21:39:43.522427   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.522438   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:43.522445   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:43.522529   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:43.559121   80762 cri.go:89] found id: ""
	I0612 21:39:43.559147   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.559163   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:43.559211   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:43.559292   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:43.595886   80762 cri.go:89] found id: ""
	I0612 21:39:43.595919   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.595937   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:43.595945   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:43.596011   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:43.638549   80762 cri.go:89] found id: ""
	I0612 21:39:43.638573   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.638583   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:43.638594   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:43.638609   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:43.705300   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:43.705338   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:43.723246   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:43.723281   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:43.807735   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:43.807760   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:43.807870   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:43.882971   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:43.883017   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:46.421476   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:46.434447   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:46.434532   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:46.470710   80762 cri.go:89] found id: ""
	I0612 21:39:46.470745   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.470758   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:46.470765   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:46.470828   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:46.504843   80762 cri.go:89] found id: ""
	I0612 21:39:46.504871   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.504878   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:46.504884   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:46.504941   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:46.542937   80762 cri.go:89] found id: ""
	I0612 21:39:46.542965   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.542973   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:46.542979   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:46.543035   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:46.581098   80762 cri.go:89] found id: ""
	I0612 21:39:46.581124   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.581133   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:46.581143   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:46.581189   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:46.617289   80762 cri.go:89] found id: ""
	I0612 21:39:46.617319   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.617329   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:46.617337   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:46.617402   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:46.651012   80762 cri.go:89] found id: ""
	I0612 21:39:46.651045   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.651057   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:46.651070   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:46.651141   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:46.688344   80762 cri.go:89] found id: ""
	I0612 21:39:46.688370   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.688379   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:46.688388   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:46.688451   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:46.724349   80762 cri.go:89] found id: ""
	I0612 21:39:46.724374   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.724382   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:46.724390   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:46.724404   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:46.797866   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:46.797894   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:46.797912   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:46.887520   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:46.887557   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:46.928143   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:46.928182   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:46.981416   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:46.981451   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:46.164845   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:48.166925   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:46.513519   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:49.016041   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:48.816335   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:50.816407   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:49.497028   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:49.510077   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:49.510147   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:49.544313   80762 cri.go:89] found id: ""
	I0612 21:39:49.544349   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.544359   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:49.544365   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:49.544416   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:49.580220   80762 cri.go:89] found id: ""
	I0612 21:39:49.580248   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.580256   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:49.580262   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:49.580316   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:49.619582   80762 cri.go:89] found id: ""
	I0612 21:39:49.619607   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.619615   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:49.619620   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:49.619692   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:49.656453   80762 cri.go:89] found id: ""
	I0612 21:39:49.656479   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.656487   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:49.656493   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:49.656557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:49.694285   80762 cri.go:89] found id: ""
	I0612 21:39:49.694318   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.694330   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:49.694338   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:49.694417   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:49.731100   80762 cri.go:89] found id: ""
	I0612 21:39:49.731127   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.731135   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:49.731140   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:49.731209   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:49.767709   80762 cri.go:89] found id: ""
	I0612 21:39:49.767731   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.767738   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:49.767744   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:49.767787   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:49.801231   80762 cri.go:89] found id: ""
	I0612 21:39:49.801265   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.801283   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:49.801294   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:49.801309   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:49.848500   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:49.848542   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:49.900084   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:49.900121   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:49.916208   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:49.916234   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:49.983283   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:49.983310   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:49.983325   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:52.566884   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:52.580400   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:52.580476   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:52.615922   80762 cri.go:89] found id: ""
	I0612 21:39:52.615957   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.615970   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:52.615978   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:52.616038   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:52.657316   80762 cri.go:89] found id: ""
	I0612 21:39:52.657348   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.657356   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:52.657362   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:52.657417   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:52.692426   80762 cri.go:89] found id: ""
	I0612 21:39:52.692459   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.692470   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:52.692478   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:52.692542   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:52.726800   80762 cri.go:89] found id: ""
	I0612 21:39:52.726835   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.726848   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:52.726856   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:52.726921   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:52.764283   80762 cri.go:89] found id: ""
	I0612 21:39:52.764314   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.764326   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:52.764341   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:52.764395   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:52.802279   80762 cri.go:89] found id: ""
	I0612 21:39:52.802311   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.802324   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:52.802331   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:52.802385   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:52.841433   80762 cri.go:89] found id: ""
	I0612 21:39:52.841466   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.841477   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:52.841484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:52.841546   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:50.667322   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:53.165294   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:51.016137   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:53.019373   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:52.818876   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:55.316845   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:52.881417   80762 cri.go:89] found id: ""
	I0612 21:39:52.881441   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.881449   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:52.881457   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:52.881468   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:52.936228   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:52.936262   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:52.950688   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:52.950718   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:53.025101   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:53.025122   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:53.025138   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:53.114986   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:53.115031   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:55.653893   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:55.668983   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:55.669047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:55.708445   80762 cri.go:89] found id: ""
	I0612 21:39:55.708475   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.708486   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:55.708494   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:55.708558   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:55.745158   80762 cri.go:89] found id: ""
	I0612 21:39:55.745185   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.745195   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:55.745204   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:55.745270   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:55.785322   80762 cri.go:89] found id: ""
	I0612 21:39:55.785344   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.785363   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:55.785370   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:55.785442   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:55.822371   80762 cri.go:89] found id: ""
	I0612 21:39:55.822397   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.822408   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:55.822416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:55.822484   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:55.856866   80762 cri.go:89] found id: ""
	I0612 21:39:55.856888   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.856895   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:55.856900   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:55.856954   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:55.891618   80762 cri.go:89] found id: ""
	I0612 21:39:55.891648   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.891660   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:55.891668   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:55.891731   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:55.927483   80762 cri.go:89] found id: ""
	I0612 21:39:55.927504   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.927513   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:55.927519   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:55.927572   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:55.963546   80762 cri.go:89] found id: ""
	I0612 21:39:55.963572   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.963584   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:55.963597   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:55.963616   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:56.037421   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:56.037442   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:56.037453   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:56.112148   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:56.112185   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:56.163359   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:56.163389   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:56.217109   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:56.217144   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:55.166499   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:57.665517   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:59.665625   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:55.513267   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:58.015558   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:57.317149   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:59.320306   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:01.815855   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:58.733278   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:58.746890   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:58.746951   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:58.785222   80762 cri.go:89] found id: ""
	I0612 21:39:58.785252   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.785263   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:58.785269   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:58.785343   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:58.824421   80762 cri.go:89] found id: ""
	I0612 21:39:58.824448   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.824455   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:58.824461   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:58.824521   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:58.863626   80762 cri.go:89] found id: ""
	I0612 21:39:58.863658   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.863669   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:58.863728   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:58.863818   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:58.904040   80762 cri.go:89] found id: ""
	I0612 21:39:58.904064   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.904073   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:58.904080   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:58.904147   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:58.937508   80762 cri.go:89] found id: ""
	I0612 21:39:58.937543   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.937557   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:58.937565   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:58.937632   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:58.974283   80762 cri.go:89] found id: ""
	I0612 21:39:58.974311   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.974322   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:58.974330   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:58.974383   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:59.009954   80762 cri.go:89] found id: ""
	I0612 21:39:59.009987   80762 logs.go:276] 0 containers: []
	W0612 21:39:59.009999   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:59.010007   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:59.010072   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:59.051911   80762 cri.go:89] found id: ""
	I0612 21:39:59.051935   80762 logs.go:276] 0 containers: []
	W0612 21:39:59.051943   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:59.051951   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:59.051961   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:59.102911   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:59.102942   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:59.116576   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:59.116608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:59.189590   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:59.189619   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:59.189634   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:59.270192   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:59.270232   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:01.820872   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:01.834916   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:01.835000   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:01.870526   80762 cri.go:89] found id: ""
	I0612 21:40:01.870560   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.870572   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:01.870579   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:01.870642   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:01.909581   80762 cri.go:89] found id: ""
	I0612 21:40:01.909614   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.909626   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:01.909633   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:01.909727   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:01.947944   80762 cri.go:89] found id: ""
	I0612 21:40:01.947976   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.947988   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:01.947995   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:01.948059   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:01.985745   80762 cri.go:89] found id: ""
	I0612 21:40:01.985781   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.985793   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:01.985800   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:01.985860   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:02.023716   80762 cri.go:89] found id: ""
	I0612 21:40:02.023741   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.023749   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:02.023754   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:02.023801   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:02.059136   80762 cri.go:89] found id: ""
	I0612 21:40:02.059168   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.059203   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:02.059212   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:02.059283   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:02.104520   80762 cri.go:89] found id: ""
	I0612 21:40:02.104544   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.104552   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:02.104558   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:02.104618   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:02.146130   80762 cri.go:89] found id: ""
	I0612 21:40:02.146164   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.146176   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:02.146187   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:02.146202   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:02.199672   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:02.199710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:02.215224   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:02.215256   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:02.290030   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:02.290057   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:02.290072   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:02.374579   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:02.374615   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:01.667390   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:04.165253   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:00.512229   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:02.513298   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:05.018848   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:03.816610   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:05.818990   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:04.915345   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:04.928323   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:04.928404   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:04.963267   80762 cri.go:89] found id: ""
	I0612 21:40:04.963297   80762 logs.go:276] 0 containers: []
	W0612 21:40:04.963310   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:04.963319   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:04.963386   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:04.998378   80762 cri.go:89] found id: ""
	I0612 21:40:04.998409   80762 logs.go:276] 0 containers: []
	W0612 21:40:04.998420   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:04.998426   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:04.998498   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:05.038094   80762 cri.go:89] found id: ""
	I0612 21:40:05.038118   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.038126   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:05.038132   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:05.038181   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:05.074331   80762 cri.go:89] found id: ""
	I0612 21:40:05.074366   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.074379   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:05.074386   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:05.074462   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:05.109332   80762 cri.go:89] found id: ""
	I0612 21:40:05.109359   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.109368   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:05.109373   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:05.109423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:05.143875   80762 cri.go:89] found id: ""
	I0612 21:40:05.143908   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.143918   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:05.143926   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:05.143990   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:05.183695   80762 cri.go:89] found id: ""
	I0612 21:40:05.183724   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.183731   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:05.183737   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:05.183792   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:05.222852   80762 cri.go:89] found id: ""
	I0612 21:40:05.222878   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.222887   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:05.222895   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:05.222907   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:05.262661   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:05.262687   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:05.315563   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:05.315593   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:05.332128   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:05.332163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:05.411675   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:05.411699   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:05.411712   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:06.665324   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:08.667163   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:07.512587   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:10.012843   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:08.316990   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:10.816093   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:07.991930   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:08.005743   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:08.005807   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:08.041685   80762 cri.go:89] found id: ""
	I0612 21:40:08.041714   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.041724   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:08.041732   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:08.041791   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:08.080875   80762 cri.go:89] found id: ""
	I0612 21:40:08.080905   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.080916   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:08.080925   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:08.080993   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:08.117290   80762 cri.go:89] found id: ""
	I0612 21:40:08.117316   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.117323   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:08.117329   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:08.117387   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:08.154345   80762 cri.go:89] found id: ""
	I0612 21:40:08.154376   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.154387   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:08.154395   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:08.154459   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:08.192913   80762 cri.go:89] found id: ""
	I0612 21:40:08.192947   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.192957   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:08.192969   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:08.193033   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:08.235732   80762 cri.go:89] found id: ""
	I0612 21:40:08.235764   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.235775   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:08.235782   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:08.235853   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:08.274282   80762 cri.go:89] found id: ""
	I0612 21:40:08.274306   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.274314   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:08.274320   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:08.274366   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:08.314585   80762 cri.go:89] found id: ""
	I0612 21:40:08.314608   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.314619   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:08.314628   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:08.314641   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:08.331693   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:08.331725   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:08.414541   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:08.414565   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:08.414584   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:08.496428   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:08.496460   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:08.546991   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:08.547020   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:11.099778   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:11.113450   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:11.113539   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:11.150426   80762 cri.go:89] found id: ""
	I0612 21:40:11.150451   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.150459   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:11.150464   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:11.150524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:11.189931   80762 cri.go:89] found id: ""
	I0612 21:40:11.189958   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.189967   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:11.189972   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:11.190031   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:11.228116   80762 cri.go:89] found id: ""
	I0612 21:40:11.228144   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.228154   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:11.228161   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:11.228243   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:11.268639   80762 cri.go:89] found id: ""
	I0612 21:40:11.268664   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.268672   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:11.268678   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:11.268723   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:11.306077   80762 cri.go:89] found id: ""
	I0612 21:40:11.306105   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.306116   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:11.306123   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:11.306187   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:11.344360   80762 cri.go:89] found id: ""
	I0612 21:40:11.344388   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.344399   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:11.344418   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:11.344475   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:11.382906   80762 cri.go:89] found id: ""
	I0612 21:40:11.382937   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.382948   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:11.382957   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:11.383027   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:11.418388   80762 cri.go:89] found id: ""
	I0612 21:40:11.418419   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.418429   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:11.418439   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:11.418453   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:11.432204   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:11.432241   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:11.508219   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:11.508251   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:11.508263   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:11.593021   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:11.593058   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:11.634056   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:11.634087   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:11.165384   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:13.170153   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:12.013303   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:14.013454   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:12.817129   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:15.316929   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:14.187831   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:14.203153   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:14.203248   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:14.239693   80762 cri.go:89] found id: ""
	I0612 21:40:14.239716   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.239723   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:14.239729   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:14.239827   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:14.273206   80762 cri.go:89] found id: ""
	I0612 21:40:14.273234   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.273244   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:14.273251   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:14.273313   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:14.315512   80762 cri.go:89] found id: ""
	I0612 21:40:14.315592   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.315610   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:14.315618   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:14.315679   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:14.352454   80762 cri.go:89] found id: ""
	I0612 21:40:14.352483   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.352496   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:14.352504   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:14.352554   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:14.387845   80762 cri.go:89] found id: ""
	I0612 21:40:14.387872   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.387880   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:14.387886   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:14.387935   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:14.423220   80762 cri.go:89] found id: ""
	I0612 21:40:14.423245   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.423254   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:14.423259   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:14.423322   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:14.457744   80762 cri.go:89] found id: ""
	I0612 21:40:14.457772   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.457784   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:14.457791   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:14.457849   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:14.493580   80762 cri.go:89] found id: ""
	I0612 21:40:14.493611   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.493622   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:14.493633   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:14.493669   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:14.566867   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:14.566894   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:14.566913   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:14.645916   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:14.645959   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:14.690232   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:14.690262   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:14.741532   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:14.741576   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:17.257886   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:17.271841   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:17.271910   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:17.309628   80762 cri.go:89] found id: ""
	I0612 21:40:17.309654   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.309667   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:17.309675   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:17.309746   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:17.346671   80762 cri.go:89] found id: ""
	I0612 21:40:17.346752   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.346769   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:17.346777   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:17.346842   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:17.381145   80762 cri.go:89] found id: ""
	I0612 21:40:17.381169   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.381177   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:17.381184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:17.381241   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:17.417159   80762 cri.go:89] found id: ""
	I0612 21:40:17.417179   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.417187   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:17.417194   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:17.417254   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:17.453189   80762 cri.go:89] found id: ""
	I0612 21:40:17.453213   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.453220   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:17.453226   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:17.453284   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:17.510988   80762 cri.go:89] found id: ""
	I0612 21:40:17.511012   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.511019   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:17.511026   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:17.511083   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:17.548141   80762 cri.go:89] found id: ""
	I0612 21:40:17.548166   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.548176   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:17.548182   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:17.548243   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:17.584591   80762 cri.go:89] found id: ""
	I0612 21:40:17.584619   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.584627   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:17.584637   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:17.584647   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:17.628627   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:17.628662   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:17.682792   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:17.682823   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:17.697921   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:17.697959   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:17.770591   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:17.770617   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:17.770633   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:15.665831   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:18.165059   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:16.014130   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:18.513491   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:17.817443   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.316576   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.350181   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:20.363671   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:20.363743   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:20.399858   80762 cri.go:89] found id: ""
	I0612 21:40:20.399889   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.399896   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:20.399903   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:20.399963   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:20.437715   80762 cri.go:89] found id: ""
	I0612 21:40:20.437755   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.437766   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:20.437776   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:20.437843   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:20.472525   80762 cri.go:89] found id: ""
	I0612 21:40:20.472558   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.472573   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:20.472582   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:20.472642   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:20.507923   80762 cri.go:89] found id: ""
	I0612 21:40:20.507948   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.507959   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:20.507966   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:20.508029   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:20.545471   80762 cri.go:89] found id: ""
	I0612 21:40:20.545502   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.545512   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:20.545519   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:20.545586   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:20.583793   80762 cri.go:89] found id: ""
	I0612 21:40:20.583829   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.583839   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:20.583846   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:20.583912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:20.624399   80762 cri.go:89] found id: ""
	I0612 21:40:20.624438   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.624449   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:20.624467   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:20.624530   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:20.665158   80762 cri.go:89] found id: ""
	I0612 21:40:20.665184   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.665194   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:20.665203   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:20.665217   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:20.743062   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:20.743101   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:20.792573   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:20.792613   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:20.847998   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:20.848033   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:20.863447   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:20.863497   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:20.938020   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:20.165455   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:22.665110   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:24.665262   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.513556   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:23.014750   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:22.316950   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:24.815377   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:26.817066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:23.438289   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:23.453792   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:23.453855   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:23.494044   80762 cri.go:89] found id: ""
	I0612 21:40:23.494070   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.494077   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:23.494083   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:23.494144   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:23.533278   80762 cri.go:89] found id: ""
	I0612 21:40:23.533305   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.533313   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:23.533319   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:23.533380   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:23.568504   80762 cri.go:89] found id: ""
	I0612 21:40:23.568538   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.568549   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:23.568556   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:23.568619   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:23.610596   80762 cri.go:89] found id: ""
	I0612 21:40:23.610624   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.610633   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:23.610638   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:23.610690   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:23.651856   80762 cri.go:89] found id: ""
	I0612 21:40:23.651886   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.651896   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:23.651903   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:23.651978   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:23.690989   80762 cri.go:89] found id: ""
	I0612 21:40:23.691020   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.691030   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:23.691036   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:23.691089   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:23.730417   80762 cri.go:89] found id: ""
	I0612 21:40:23.730454   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.730467   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:23.730476   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:23.730538   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:23.773887   80762 cri.go:89] found id: ""
	I0612 21:40:23.773913   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.773921   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:23.773932   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:23.773947   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:23.825771   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:23.825805   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:23.840136   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:23.840163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:23.933645   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:23.933670   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:23.933686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:24.020205   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:24.020243   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:26.566746   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:26.579557   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:26.579612   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:26.614721   80762 cri.go:89] found id: ""
	I0612 21:40:26.614749   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.614757   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:26.614763   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:26.614815   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:26.651398   80762 cri.go:89] found id: ""
	I0612 21:40:26.651427   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.651437   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:26.651445   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:26.651506   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:26.688217   80762 cri.go:89] found id: ""
	I0612 21:40:26.688249   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.688261   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:26.688268   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:26.688333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:26.721316   80762 cri.go:89] found id: ""
	I0612 21:40:26.721346   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.721357   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:26.721364   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:26.721424   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:26.758842   80762 cri.go:89] found id: ""
	I0612 21:40:26.758868   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.758878   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:26.758885   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:26.758957   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:26.795696   80762 cri.go:89] found id: ""
	I0612 21:40:26.795725   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.795733   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:26.795738   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:26.795788   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:26.834903   80762 cri.go:89] found id: ""
	I0612 21:40:26.834932   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.834941   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:26.834947   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:26.835020   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:26.872751   80762 cri.go:89] found id: ""
	I0612 21:40:26.872788   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.872796   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:26.872805   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:26.872817   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:26.952401   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:26.952440   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:26.990548   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:26.990583   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:27.042973   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:27.043029   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:27.058348   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:27.058379   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:27.133047   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:26.666430   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.165063   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:25.513982   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:28.012556   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:30.017664   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.315668   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:31.316817   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.634105   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:29.654113   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:29.654171   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:29.700138   80762 cri.go:89] found id: ""
	I0612 21:40:29.700169   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.700179   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:29.700188   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:29.700260   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:29.751599   80762 cri.go:89] found id: ""
	I0612 21:40:29.751628   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.751638   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:29.751646   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:29.751699   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:29.801971   80762 cri.go:89] found id: ""
	I0612 21:40:29.801995   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.802003   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:29.802008   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:29.802059   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:29.839381   80762 cri.go:89] found id: ""
	I0612 21:40:29.839407   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.839418   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:29.839426   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:29.839484   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:29.876634   80762 cri.go:89] found id: ""
	I0612 21:40:29.876661   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.876668   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:29.876675   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:29.876721   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:29.909673   80762 cri.go:89] found id: ""
	I0612 21:40:29.909707   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.909718   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:29.909726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:29.909791   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:29.947984   80762 cri.go:89] found id: ""
	I0612 21:40:29.948019   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.948029   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:29.948037   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:29.948099   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:29.988611   80762 cri.go:89] found id: ""
	I0612 21:40:29.988639   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.988650   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:29.988660   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:29.988675   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:30.073180   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:30.073216   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:30.114703   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:30.114732   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:30.173242   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:30.173278   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:30.189081   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:30.189112   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:30.263564   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:32.763967   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:32.776738   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:32.776808   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:32.813088   80762 cri.go:89] found id: ""
	I0612 21:40:32.813115   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.813125   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:32.813132   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:32.813195   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:32.850960   80762 cri.go:89] found id: ""
	I0612 21:40:32.850987   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.850996   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:32.851004   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:32.851065   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:31.166578   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:33.669302   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:32.512480   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:34.512817   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:33.815867   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:35.817105   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:32.887229   80762 cri.go:89] found id: ""
	I0612 21:40:32.887259   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.887270   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:32.887277   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:32.887346   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:32.923123   80762 cri.go:89] found id: ""
	I0612 21:40:32.923148   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.923158   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:32.923164   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:32.923242   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:32.962603   80762 cri.go:89] found id: ""
	I0612 21:40:32.962628   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.962638   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:32.962644   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:32.962695   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:32.998971   80762 cri.go:89] found id: ""
	I0612 21:40:32.999025   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.999037   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:32.999046   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:32.999120   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:33.037640   80762 cri.go:89] found id: ""
	I0612 21:40:33.037670   80762 logs.go:276] 0 containers: []
	W0612 21:40:33.037680   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:33.037686   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:33.037748   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:33.073758   80762 cri.go:89] found id: ""
	I0612 21:40:33.073787   80762 logs.go:276] 0 containers: []
	W0612 21:40:33.073794   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:33.073804   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:33.073815   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:33.124478   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:33.124512   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:33.139010   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:33.139036   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:33.207693   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:33.207716   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:33.207732   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:33.287710   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:33.287746   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:35.831654   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:35.845783   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:35.845845   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:35.882097   80762 cri.go:89] found id: ""
	I0612 21:40:35.882129   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.882141   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:35.882149   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:35.882205   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:35.920931   80762 cri.go:89] found id: ""
	I0612 21:40:35.920972   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.920980   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:35.920985   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:35.921061   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:35.958689   80762 cri.go:89] found id: ""
	I0612 21:40:35.958712   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.958721   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:35.958726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:35.958774   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:35.994973   80762 cri.go:89] found id: ""
	I0612 21:40:35.995028   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.995040   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:35.995048   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:35.995114   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:36.035679   80762 cri.go:89] found id: ""
	I0612 21:40:36.035707   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.035715   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:36.035721   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:36.035768   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:36.071498   80762 cri.go:89] found id: ""
	I0612 21:40:36.071525   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.071534   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:36.071544   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:36.071594   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:36.107367   80762 cri.go:89] found id: ""
	I0612 21:40:36.107397   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.107406   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:36.107413   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:36.107466   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:36.148668   80762 cri.go:89] found id: ""
	I0612 21:40:36.148699   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.148710   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:36.148721   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:36.148736   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:36.207719   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:36.207765   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:36.223129   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:36.223158   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:36.290786   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:36.290809   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:36.290822   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:36.375361   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:36.375398   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:36.165430   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.165989   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:37.015936   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:39.513497   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.318886   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:40.815802   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.921100   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:38.935420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:38.935491   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:38.970519   80762 cri.go:89] found id: ""
	I0612 21:40:38.970548   80762 logs.go:276] 0 containers: []
	W0612 21:40:38.970559   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:38.970567   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:38.970639   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:39.005866   80762 cri.go:89] found id: ""
	I0612 21:40:39.005888   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.005896   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:39.005902   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:39.005954   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:39.043619   80762 cri.go:89] found id: ""
	I0612 21:40:39.043647   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.043655   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:39.043661   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:39.043709   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:39.081311   80762 cri.go:89] found id: ""
	I0612 21:40:39.081336   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.081344   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:39.081350   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:39.081410   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:39.117326   80762 cri.go:89] found id: ""
	I0612 21:40:39.117358   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.117367   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:39.117372   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:39.117423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:39.151785   80762 cri.go:89] found id: ""
	I0612 21:40:39.151819   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.151828   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:39.151835   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:39.151899   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:39.187031   80762 cri.go:89] found id: ""
	I0612 21:40:39.187057   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.187065   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:39.187071   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:39.187119   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:39.222186   80762 cri.go:89] found id: ""
	I0612 21:40:39.222212   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.222223   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:39.222233   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:39.222245   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:39.276126   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:39.276164   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:39.291631   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:39.291658   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:39.365615   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:39.365641   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:39.365659   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:39.442548   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:39.442600   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:41.980840   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:41.996629   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:41.996686   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:42.034158   80762 cri.go:89] found id: ""
	I0612 21:40:42.034186   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.034195   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:42.034202   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:42.034274   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:42.070981   80762 cri.go:89] found id: ""
	I0612 21:40:42.071011   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.071021   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:42.071028   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:42.071093   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:42.108282   80762 cri.go:89] found id: ""
	I0612 21:40:42.108309   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.108316   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:42.108322   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:42.108369   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:42.146394   80762 cri.go:89] found id: ""
	I0612 21:40:42.146423   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.146434   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:42.146454   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:42.146539   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:42.183577   80762 cri.go:89] found id: ""
	I0612 21:40:42.183601   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.183608   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:42.183614   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:42.183662   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:42.222069   80762 cri.go:89] found id: ""
	I0612 21:40:42.222100   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.222109   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:42.222115   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:42.222168   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:42.259128   80762 cri.go:89] found id: ""
	I0612 21:40:42.259155   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.259164   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:42.259192   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:42.259282   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:42.296321   80762 cri.go:89] found id: ""
	I0612 21:40:42.296354   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.296368   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:42.296380   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:42.296400   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:42.311098   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:42.311137   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:42.386116   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:42.386144   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:42.386163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:42.467016   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:42.467054   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:42.509143   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:42.509180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:40.166288   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.664817   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:44.665596   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.017043   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:44.513368   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.816702   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:45.316890   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:45.062872   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:45.076570   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:45.076658   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:45.114362   80762 cri.go:89] found id: ""
	I0612 21:40:45.114394   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.114404   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:45.114412   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:45.114478   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:45.151577   80762 cri.go:89] found id: ""
	I0612 21:40:45.151609   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.151620   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:45.151627   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:45.151689   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:45.188753   80762 cri.go:89] found id: ""
	I0612 21:40:45.188785   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.188795   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:45.188802   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:45.188861   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:45.224775   80762 cri.go:89] found id: ""
	I0612 21:40:45.224801   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.224808   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:45.224814   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:45.224873   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:45.260440   80762 cri.go:89] found id: ""
	I0612 21:40:45.260472   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.260483   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:45.260490   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:45.260547   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:45.297662   80762 cri.go:89] found id: ""
	I0612 21:40:45.297697   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.297709   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:45.297716   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:45.297774   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:45.335637   80762 cri.go:89] found id: ""
	I0612 21:40:45.335669   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.335682   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:45.335690   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:45.335753   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:45.371523   80762 cri.go:89] found id: ""
	I0612 21:40:45.371580   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.371590   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:45.371599   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:45.371610   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:45.424029   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:45.424065   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:45.440339   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:45.440378   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:45.509504   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:45.509526   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:45.509541   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:45.591857   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:45.591893   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:47.166437   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.665544   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:47.016561   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.511894   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:47.320090   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.816816   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:48.135912   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:48.151271   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:48.151331   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:48.192740   80762 cri.go:89] found id: ""
	I0612 21:40:48.192775   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.192788   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:48.192798   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:48.192875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:48.230440   80762 cri.go:89] found id: ""
	I0612 21:40:48.230469   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.230479   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:48.230487   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:48.230549   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:48.270892   80762 cri.go:89] found id: ""
	I0612 21:40:48.270922   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.270933   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:48.270941   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:48.270996   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:48.308555   80762 cri.go:89] found id: ""
	I0612 21:40:48.308580   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.308588   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:48.308594   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:48.308640   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:48.342705   80762 cri.go:89] found id: ""
	I0612 21:40:48.342727   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.342735   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:48.342741   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:48.342788   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:48.377418   80762 cri.go:89] found id: ""
	I0612 21:40:48.377450   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.377461   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:48.377468   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:48.377535   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:48.413092   80762 cri.go:89] found id: ""
	I0612 21:40:48.413126   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.413141   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:48.413149   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:48.413215   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:48.447673   80762 cri.go:89] found id: ""
	I0612 21:40:48.447699   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.447708   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:48.447716   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:48.447728   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:48.488508   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:48.488542   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:48.540573   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:48.540608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:48.554735   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:48.554762   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:48.632074   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:48.632098   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:48.632117   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:51.212336   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:51.227428   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:51.227493   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:51.268124   80762 cri.go:89] found id: ""
	I0612 21:40:51.268157   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.268167   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:51.268172   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:51.268220   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:51.305751   80762 cri.go:89] found id: ""
	I0612 21:40:51.305777   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.305785   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:51.305793   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:51.305849   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:51.347292   80762 cri.go:89] found id: ""
	I0612 21:40:51.347318   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.347325   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:51.347332   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:51.347394   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:51.387476   80762 cri.go:89] found id: ""
	I0612 21:40:51.387501   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.387509   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:51.387515   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:51.387573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:51.431992   80762 cri.go:89] found id: ""
	I0612 21:40:51.432019   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.432029   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:51.432036   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:51.432096   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:51.477204   80762 cri.go:89] found id: ""
	I0612 21:40:51.477235   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.477246   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:51.477254   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:51.477346   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:51.518449   80762 cri.go:89] found id: ""
	I0612 21:40:51.518477   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.518488   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:51.518502   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:51.518562   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:51.554991   80762 cri.go:89] found id: ""
	I0612 21:40:51.555015   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.555024   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:51.555033   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:51.555046   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:51.606732   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:51.606769   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:51.620512   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:51.620538   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:51.697029   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:51.697058   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:51.697074   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:51.775401   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:51.775437   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:51.666561   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.166247   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:51.512909   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.012887   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:52.315904   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.316764   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:56.816819   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.318059   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:54.331420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:54.331509   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:54.367886   80762 cri.go:89] found id: ""
	I0612 21:40:54.367926   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.367948   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:54.367959   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:54.368047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:54.403998   80762 cri.go:89] found id: ""
	I0612 21:40:54.404023   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.404034   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:54.404041   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:54.404108   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:54.441449   80762 cri.go:89] found id: ""
	I0612 21:40:54.441480   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.441491   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:54.441498   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:54.441557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:54.476459   80762 cri.go:89] found id: ""
	I0612 21:40:54.476490   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.476500   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:54.476508   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:54.476573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:54.515337   80762 cri.go:89] found id: ""
	I0612 21:40:54.515360   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.515368   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:54.515374   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:54.515423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:54.551447   80762 cri.go:89] found id: ""
	I0612 21:40:54.551468   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.551475   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:54.551481   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:54.551528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:54.587082   80762 cri.go:89] found id: ""
	I0612 21:40:54.587114   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.587125   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:54.587145   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:54.587225   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:54.624211   80762 cri.go:89] found id: ""
	I0612 21:40:54.624235   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.624257   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:54.624268   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:54.624282   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:54.677816   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:54.677848   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:54.693725   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:54.693749   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:54.772229   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:54.772255   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:54.772273   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:54.852543   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:54.852578   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:57.397722   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:57.411082   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:57.411145   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:57.449633   80762 cri.go:89] found id: ""
	I0612 21:40:57.449662   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.449673   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:57.449680   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:57.449745   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:57.489855   80762 cri.go:89] found id: ""
	I0612 21:40:57.489880   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.489889   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:57.489894   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:57.489952   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:57.528986   80762 cri.go:89] found id: ""
	I0612 21:40:57.529006   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.529014   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:57.529019   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:57.529081   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:57.566701   80762 cri.go:89] found id: ""
	I0612 21:40:57.566730   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.566739   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:57.566746   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:57.566800   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:57.601114   80762 cri.go:89] found id: ""
	I0612 21:40:57.601137   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.601145   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:57.601151   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:57.601212   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:57.636120   80762 cri.go:89] found id: ""
	I0612 21:40:57.636145   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.636155   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:57.636163   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:57.636225   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:57.676912   80762 cri.go:89] found id: ""
	I0612 21:40:57.676953   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.676960   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:57.676966   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:57.677039   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:57.714671   80762 cri.go:89] found id: ""
	I0612 21:40:57.714691   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.714699   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:57.714707   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:57.714720   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:57.770550   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:57.770583   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:57.785062   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:57.785093   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:57.853448   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:57.853468   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:57.853480   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:56.167768   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.665108   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:56.014274   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.014535   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.816961   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:00.817450   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:57.939957   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:57.939999   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:00.493469   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:00.509746   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:00.509819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:00.546582   80762 cri.go:89] found id: ""
	I0612 21:41:00.546610   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.546620   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:00.546629   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:00.546683   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:00.584229   80762 cri.go:89] found id: ""
	I0612 21:41:00.584256   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.584264   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:00.584269   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:00.584337   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:00.618679   80762 cri.go:89] found id: ""
	I0612 21:41:00.618704   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.618712   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:00.618719   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:00.618778   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:00.656336   80762 cri.go:89] found id: ""
	I0612 21:41:00.656364   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.656375   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:00.656384   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:00.656457   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:00.694147   80762 cri.go:89] found id: ""
	I0612 21:41:00.694173   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.694182   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:00.694187   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:00.694236   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:00.733964   80762 cri.go:89] found id: ""
	I0612 21:41:00.733994   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.734006   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:00.734014   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:00.734076   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:00.771245   80762 cri.go:89] found id: ""
	I0612 21:41:00.771274   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.771287   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:00.771293   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:00.771357   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:00.809118   80762 cri.go:89] found id: ""
	I0612 21:41:00.809150   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.809158   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:00.809168   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:00.809188   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:00.863479   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:00.863514   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:00.878749   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:00.878783   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:00.955800   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:00.955825   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:00.955844   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:01.037587   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:01.037618   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:00.666373   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.165203   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:00.513805   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.017922   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.317115   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:05.817907   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.583063   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:03.597656   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:03.597732   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:03.633233   80762 cri.go:89] found id: ""
	I0612 21:41:03.633263   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.633283   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:03.633291   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:03.633357   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:03.679900   80762 cri.go:89] found id: ""
	I0612 21:41:03.679930   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.679941   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:03.679948   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:03.680018   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:03.718766   80762 cri.go:89] found id: ""
	I0612 21:41:03.718792   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.718800   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:03.718811   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:03.718868   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:03.759404   80762 cri.go:89] found id: ""
	I0612 21:41:03.759429   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.759437   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:03.759443   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:03.759496   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:03.794313   80762 cri.go:89] found id: ""
	I0612 21:41:03.794348   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.794357   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:03.794364   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:03.794430   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:03.832525   80762 cri.go:89] found id: ""
	I0612 21:41:03.832546   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.832554   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:03.832559   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:03.832607   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:03.872841   80762 cri.go:89] found id: ""
	I0612 21:41:03.872868   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.872878   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:03.872885   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:03.872945   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:03.912736   80762 cri.go:89] found id: ""
	I0612 21:41:03.912760   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.912770   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:03.912781   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:03.912794   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:03.986645   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:03.986672   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:03.986688   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:04.066766   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:04.066799   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:04.108219   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:04.108250   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:04.168866   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:04.168911   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:06.684232   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:06.698359   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:06.698443   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:06.735324   80762 cri.go:89] found id: ""
	I0612 21:41:06.735350   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.735359   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:06.735364   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:06.735418   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:06.771763   80762 cri.go:89] found id: ""
	I0612 21:41:06.771786   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.771794   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:06.771799   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:06.771850   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:06.808151   80762 cri.go:89] found id: ""
	I0612 21:41:06.808179   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.808188   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:06.808193   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:06.808263   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:06.846099   80762 cri.go:89] found id: ""
	I0612 21:41:06.846121   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.846129   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:06.846134   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:06.846182   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:06.883559   80762 cri.go:89] found id: ""
	I0612 21:41:06.883584   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.883591   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:06.883597   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:06.883645   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:06.920799   80762 cri.go:89] found id: ""
	I0612 21:41:06.920830   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.920841   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:06.920849   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:06.920914   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:06.964441   80762 cri.go:89] found id: ""
	I0612 21:41:06.964472   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.964482   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:06.964490   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:06.964561   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:07.000866   80762 cri.go:89] found id: ""
	I0612 21:41:07.000901   80762 logs.go:276] 0 containers: []
	W0612 21:41:07.000912   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:07.000924   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:07.000993   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:07.017074   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:07.017121   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:07.093873   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:07.093901   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:07.093925   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:07.171258   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:07.171293   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:07.212588   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:07.212624   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:05.166177   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:07.665354   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.665558   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:05.512109   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:07.512615   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.513483   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:08.316327   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:10.316456   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.767332   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:09.781184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:09.781249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:09.818966   80762 cri.go:89] found id: ""
	I0612 21:41:09.818999   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.819008   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:09.819014   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:09.819064   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:09.854714   80762 cri.go:89] found id: ""
	I0612 21:41:09.854742   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.854760   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:09.854772   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:09.854823   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:09.891229   80762 cri.go:89] found id: ""
	I0612 21:41:09.891257   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.891268   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:09.891274   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:09.891335   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:09.928569   80762 cri.go:89] found id: ""
	I0612 21:41:09.928598   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.928610   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:09.928617   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:09.928680   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:09.963681   80762 cri.go:89] found id: ""
	I0612 21:41:09.963714   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.963725   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:09.963733   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:09.963819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:10.002340   80762 cri.go:89] found id: ""
	I0612 21:41:10.002368   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.002380   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:10.002388   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:10.002454   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:10.041935   80762 cri.go:89] found id: ""
	I0612 21:41:10.041961   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.041972   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:10.041979   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:10.042047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:10.080541   80762 cri.go:89] found id: ""
	I0612 21:41:10.080571   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.080584   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:10.080598   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:10.080614   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:10.140904   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:10.140944   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:10.176646   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:10.176688   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:10.272147   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:10.272169   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:10.272183   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:10.352649   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:10.352686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:12.166618   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.665896   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.013177   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.013716   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.317177   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.317391   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:16.815940   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.896274   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:12.911147   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:12.911231   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:12.947628   80762 cri.go:89] found id: ""
	I0612 21:41:12.947651   80762 logs.go:276] 0 containers: []
	W0612 21:41:12.947660   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:12.947665   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:12.947726   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:12.982813   80762 cri.go:89] found id: ""
	I0612 21:41:12.982837   80762 logs.go:276] 0 containers: []
	W0612 21:41:12.982845   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:12.982851   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:12.982898   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:13.021360   80762 cri.go:89] found id: ""
	I0612 21:41:13.021403   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.021412   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:13.021417   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:13.021468   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:13.063534   80762 cri.go:89] found id: ""
	I0612 21:41:13.063566   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.063576   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:13.063585   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:13.063666   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:13.098767   80762 cri.go:89] found id: ""
	I0612 21:41:13.098796   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.098807   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:13.098816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:13.098878   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:13.140764   80762 cri.go:89] found id: ""
	I0612 21:41:13.140797   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.140809   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:13.140816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:13.140882   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:13.180356   80762 cri.go:89] found id: ""
	I0612 21:41:13.180400   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.180413   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:13.180420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:13.180482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:13.215198   80762 cri.go:89] found id: ""
	I0612 21:41:13.215227   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.215238   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:13.215249   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:13.215265   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:13.273143   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:13.273182   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:13.287861   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:13.287893   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:13.366052   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:13.366073   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:13.366099   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:13.450980   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:13.451015   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:15.991386   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:16.005618   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:16.005699   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:16.047253   80762 cri.go:89] found id: ""
	I0612 21:41:16.047281   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.047289   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:16.047295   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:16.047356   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:16.082860   80762 cri.go:89] found id: ""
	I0612 21:41:16.082886   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.082894   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:16.082899   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:16.082948   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:16.123127   80762 cri.go:89] found id: ""
	I0612 21:41:16.123152   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.123164   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:16.123187   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:16.123247   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:16.167155   80762 cri.go:89] found id: ""
	I0612 21:41:16.167189   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.167199   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:16.167207   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:16.167276   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:16.204036   80762 cri.go:89] found id: ""
	I0612 21:41:16.204061   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.204071   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:16.204079   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:16.204140   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:16.246672   80762 cri.go:89] found id: ""
	I0612 21:41:16.246701   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.246712   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:16.246721   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:16.246785   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:16.286820   80762 cri.go:89] found id: ""
	I0612 21:41:16.286849   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.286857   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:16.286864   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:16.286919   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:16.326622   80762 cri.go:89] found id: ""
	I0612 21:41:16.326649   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.326660   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:16.326667   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:16.326678   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:16.407492   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:16.407525   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:16.448207   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:16.448236   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:16.501675   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:16.501714   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:16.518173   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:16.518202   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:16.592582   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
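	(The cycle above repeats roughly every three seconds while minikube waits for a healthy apiserver: it probes each control-plane container with crictl, then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output, and every probe comes back empty because nothing answers on localhost:8443. A minimal sketch of re-running the same checks by hand on the node, e.g. over `minikube ssh`; every command below is taken verbatim from the ssh_runner lines in this log.)

	# Probe each control-plane component the way logs.go does
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== containers matching ${name} =="
	  sudo crictl ps -a --quiet --name="${name}"   # empty output => not running
	done

	# Log sources minikube gathers on each retry
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a

	# The call that fails with "connection refused" while the apiserver is down
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig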
	I0612 21:41:17.166163   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.167204   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:16.514405   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.016197   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:18.816596   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:20.817504   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.093054   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:19.107926   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:19.108002   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:19.149386   80762 cri.go:89] found id: ""
	I0612 21:41:19.149411   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.149421   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:19.149429   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:19.149493   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:19.188092   80762 cri.go:89] found id: ""
	I0612 21:41:19.188120   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.188131   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:19.188139   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:19.188201   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:19.227203   80762 cri.go:89] found id: ""
	I0612 21:41:19.227229   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.227235   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:19.227242   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:19.227301   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:19.269187   80762 cri.go:89] found id: ""
	I0612 21:41:19.269217   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.269225   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:19.269232   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:19.269294   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:19.305394   80762 cri.go:89] found id: ""
	I0612 21:41:19.305418   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.305425   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:19.305431   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:19.305480   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:19.347794   80762 cri.go:89] found id: ""
	I0612 21:41:19.347825   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.347837   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:19.347846   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:19.347907   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:19.384314   80762 cri.go:89] found id: ""
	I0612 21:41:19.384346   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.384364   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:19.384372   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:19.384428   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:19.421782   80762 cri.go:89] found id: ""
	I0612 21:41:19.421811   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.421822   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:19.421834   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:19.421849   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:19.475969   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:19.476000   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:19.490683   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:19.490710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:19.574492   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:19.574513   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:19.574525   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:19.662288   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:19.662324   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:22.205404   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:22.220217   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:22.220297   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:22.256870   80762 cri.go:89] found id: ""
	I0612 21:41:22.256901   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.256913   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:22.256921   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:22.256984   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:22.290380   80762 cri.go:89] found id: ""
	I0612 21:41:22.290413   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.290425   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:22.290433   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:22.290497   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:22.324981   80762 cri.go:89] found id: ""
	I0612 21:41:22.325010   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.325019   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:22.325024   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:22.325093   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:22.362900   80762 cri.go:89] found id: ""
	I0612 21:41:22.362926   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.362938   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:22.362946   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:22.363008   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:22.399004   80762 cri.go:89] found id: ""
	I0612 21:41:22.399037   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.399048   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:22.399056   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:22.399116   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:22.434306   80762 cri.go:89] found id: ""
	I0612 21:41:22.434341   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.434355   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:22.434365   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:22.434422   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:22.479085   80762 cri.go:89] found id: ""
	I0612 21:41:22.479116   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.479129   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:22.479142   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:22.479228   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:22.516730   80762 cri.go:89] found id: ""
	I0612 21:41:22.516755   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.516761   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:22.516769   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:22.516780   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:22.570921   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:22.570957   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:22.585409   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:22.585437   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:22.667314   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:22.667342   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:22.667360   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:22.743865   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:22.743901   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:21.170060   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.666364   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:21.021658   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.512541   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.316232   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.816641   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.282306   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:25.297334   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:25.297407   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:25.336610   80762 cri.go:89] found id: ""
	I0612 21:41:25.336641   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.336654   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:25.336662   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:25.336729   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:25.373307   80762 cri.go:89] found id: ""
	I0612 21:41:25.373338   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.373350   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:25.373358   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:25.373425   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:25.413141   80762 cri.go:89] found id: ""
	I0612 21:41:25.413169   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.413177   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:25.413183   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:25.413233   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:25.450810   80762 cri.go:89] found id: ""
	I0612 21:41:25.450842   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.450853   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:25.450862   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:25.450924   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:25.487017   80762 cri.go:89] found id: ""
	I0612 21:41:25.487049   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.487255   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:25.487269   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:25.487328   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:25.524335   80762 cri.go:89] found id: ""
	I0612 21:41:25.524361   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.524371   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:25.524377   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:25.524428   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:25.560394   80762 cri.go:89] found id: ""
	I0612 21:41:25.560421   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.560429   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:25.560435   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:25.560482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:25.599334   80762 cri.go:89] found id: ""
	I0612 21:41:25.599362   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.599372   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:25.599384   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:25.599399   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:25.680153   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:25.680195   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:25.726336   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:25.726377   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:25.777064   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:25.777098   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:25.791978   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:25.792007   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:25.868860   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:25.667028   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.164920   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.514249   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.012042   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:30.013658   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.316543   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:30.816789   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.369099   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:28.382729   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:28.382786   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:28.423835   80762 cri.go:89] found id: ""
	I0612 21:41:28.423865   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.423875   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:28.423889   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:28.423953   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:28.463098   80762 cri.go:89] found id: ""
	I0612 21:41:28.463127   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.463137   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:28.463144   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:28.463223   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:28.499678   80762 cri.go:89] found id: ""
	I0612 21:41:28.499707   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.499718   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:28.499726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:28.499786   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:28.536057   80762 cri.go:89] found id: ""
	I0612 21:41:28.536089   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.536101   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:28.536108   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:28.536180   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:28.571052   80762 cri.go:89] found id: ""
	I0612 21:41:28.571080   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.571090   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:28.571098   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:28.571162   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:28.607320   80762 cri.go:89] found id: ""
	I0612 21:41:28.607348   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.607360   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:28.607368   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:28.607427   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:28.643000   80762 cri.go:89] found id: ""
	I0612 21:41:28.643029   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.643037   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:28.643042   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:28.643113   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:28.684134   80762 cri.go:89] found id: ""
	I0612 21:41:28.684164   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.684175   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:28.684186   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:28.684201   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:28.737059   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:28.737092   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:28.753290   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:28.753320   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:28.826964   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:28.826990   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:28.827009   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:28.908874   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:28.908919   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:31.450362   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:31.465831   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:31.465912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:31.507441   80762 cri.go:89] found id: ""
	I0612 21:41:31.507465   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.507474   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:31.507482   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:31.507546   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:31.541403   80762 cri.go:89] found id: ""
	I0612 21:41:31.541437   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.541450   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:31.541458   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:31.541524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:31.576367   80762 cri.go:89] found id: ""
	I0612 21:41:31.576393   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.576405   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:31.576412   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:31.576475   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:31.615053   80762 cri.go:89] found id: ""
	I0612 21:41:31.615081   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.615091   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:31.615099   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:31.615159   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:31.650460   80762 cri.go:89] found id: ""
	I0612 21:41:31.650495   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.650504   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:31.650511   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:31.650580   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:31.690764   80762 cri.go:89] found id: ""
	I0612 21:41:31.690792   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.690803   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:31.690810   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:31.690870   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:31.729785   80762 cri.go:89] found id: ""
	I0612 21:41:31.729809   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.729817   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:31.729822   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:31.729881   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:31.772978   80762 cri.go:89] found id: ""
	I0612 21:41:31.773005   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.773013   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:31.773023   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:31.773038   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:31.830451   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:31.830484   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:31.846821   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:31.846850   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:31.927289   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:31.927328   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:31.927358   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:32.004814   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:32.004852   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:30.165423   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:32.165695   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.664959   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:32.512866   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.515104   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:33.316674   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:35.816686   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.550931   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:34.567559   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:34.567618   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:34.602234   80762 cri.go:89] found id: ""
	I0612 21:41:34.602260   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.602267   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:34.602273   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:34.602318   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:34.639575   80762 cri.go:89] found id: ""
	I0612 21:41:34.639598   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.639605   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:34.639610   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:34.639656   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:34.681325   80762 cri.go:89] found id: ""
	I0612 21:41:34.681360   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.681368   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:34.681374   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:34.681478   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:34.721405   80762 cri.go:89] found id: ""
	I0612 21:41:34.721432   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.721444   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:34.721451   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:34.721517   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:34.764344   80762 cri.go:89] found id: ""
	I0612 21:41:34.764375   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.764386   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:34.764394   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:34.764459   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:34.802083   80762 cri.go:89] found id: ""
	I0612 21:41:34.802107   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.802115   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:34.802121   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:34.802181   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:34.843418   80762 cri.go:89] found id: ""
	I0612 21:41:34.843441   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.843450   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:34.843455   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:34.843501   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:34.877803   80762 cri.go:89] found id: ""
	I0612 21:41:34.877832   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.877842   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:34.877852   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:34.877867   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:34.930515   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:34.930545   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:34.943705   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:34.943729   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:35.024912   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:35.024931   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:35.024941   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:35.109129   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:35.109165   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:37.651788   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:37.667901   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:37.667964   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:37.709599   80762 cri.go:89] found id: ""
	I0612 21:41:37.709627   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.709637   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:37.709645   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:37.709700   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:37.747150   80762 cri.go:89] found id: ""
	I0612 21:41:37.747191   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.747204   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:37.747212   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:37.747273   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:37.785528   80762 cri.go:89] found id: ""
	I0612 21:41:37.785552   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.785560   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:37.785567   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:37.785614   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:37.822363   80762 cri.go:89] found id: ""
	I0612 21:41:37.822390   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.822400   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:37.822408   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:37.822468   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:36.666054   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:39.165222   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:37.012397   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:39.012503   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:38.317132   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:40.821114   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:37.858285   80762 cri.go:89] found id: ""
	I0612 21:41:37.858395   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.858409   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:37.858416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:37.858466   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:37.897500   80762 cri.go:89] found id: ""
	I0612 21:41:37.897542   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.897556   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:37.897574   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:37.897635   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:37.937878   80762 cri.go:89] found id: ""
	I0612 21:41:37.937905   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.937921   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:37.937927   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:37.937985   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:37.978282   80762 cri.go:89] found id: ""
	I0612 21:41:37.978310   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.978319   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:37.978327   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:37.978341   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:38.055864   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:38.055890   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:38.055903   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:38.135883   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:38.135918   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:38.178641   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:38.178668   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:38.236635   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:38.236686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:40.759426   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:40.773526   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:40.773598   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:40.819130   80762 cri.go:89] found id: ""
	I0612 21:41:40.819161   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.819190   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:40.819202   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:40.819264   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:40.856176   80762 cri.go:89] found id: ""
	I0612 21:41:40.856204   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.856216   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:40.856224   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:40.856287   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:40.896709   80762 cri.go:89] found id: ""
	I0612 21:41:40.896739   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.896750   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:40.896759   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:40.896820   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:40.936431   80762 cri.go:89] found id: ""
	I0612 21:41:40.936457   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.936465   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:40.936471   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:40.936528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:40.979773   80762 cri.go:89] found id: ""
	I0612 21:41:40.979809   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.979820   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:40.979828   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:40.979892   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:41.023885   80762 cri.go:89] found id: ""
	I0612 21:41:41.023910   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.023919   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:41.023925   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:41.024004   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:41.070370   80762 cri.go:89] found id: ""
	I0612 21:41:41.070396   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.070405   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:41.070411   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:41.070467   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:41.115282   80762 cri.go:89] found id: ""
	I0612 21:41:41.115311   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.115321   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:41.115332   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:41.115346   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:41.128680   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:41.128710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:41.206100   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:41.206125   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:41.206140   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:41.283499   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:41.283536   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:41.323275   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:41.323307   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:41.166258   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.666600   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:41.013379   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.512866   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.316659   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:45.816066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
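	(In parallel, the other test runs in this batch, PIDs 80404, 80243 and 80157, keep polling their metrics-server pods, which never report Ready. A minimal sketch of checking the same condition by hand, assuming kubectl's current context points at the affected cluster; the pod name is copied from the log above.)

	# Read the Ready condition that pod_ready.go is polling
	kubectl -n kube-system get pod metrics-server-569cc877fc-d5mj6 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'

	# Or block until it becomes Ready (times out while the addon is unhealthy)
	kubectl -n kube-system wait --for=condition=ready \
	  pod/metrics-server-569cc877fc-d5mj6 --timeout=60s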
	I0612 21:41:43.875750   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:43.890156   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:43.890216   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:43.935105   80762 cri.go:89] found id: ""
	I0612 21:41:43.935135   80762 logs.go:276] 0 containers: []
	W0612 21:41:43.935147   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:43.935155   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:43.935236   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:43.980929   80762 cri.go:89] found id: ""
	I0612 21:41:43.980958   80762 logs.go:276] 0 containers: []
	W0612 21:41:43.980967   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:43.980973   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:43.981051   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:44.029387   80762 cri.go:89] found id: ""
	I0612 21:41:44.029409   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.029416   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:44.029421   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:44.029483   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:44.067415   80762 cri.go:89] found id: ""
	I0612 21:41:44.067449   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.067460   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:44.067468   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:44.067528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:44.105093   80762 cri.go:89] found id: ""
	I0612 21:41:44.105117   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.105125   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:44.105131   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:44.105178   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:44.142647   80762 cri.go:89] found id: ""
	I0612 21:41:44.142680   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.142691   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:44.142699   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:44.142759   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:44.182725   80762 cri.go:89] found id: ""
	I0612 21:41:44.182756   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.182767   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:44.182775   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:44.182836   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:44.219538   80762 cri.go:89] found id: ""
	I0612 21:41:44.219568   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.219580   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:44.219593   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:44.219608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:44.272234   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:44.272267   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:44.285631   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:44.285663   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:44.362453   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:44.362470   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:44.362482   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:44.444624   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:44.444656   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:46.985731   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:46.999749   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:46.999819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:47.035051   80762 cri.go:89] found id: ""
	I0612 21:41:47.035073   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.035080   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:47.035086   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:47.035136   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:47.077929   80762 cri.go:89] found id: ""
	I0612 21:41:47.077964   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.077975   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:47.077982   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:47.078062   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:47.111621   80762 cri.go:89] found id: ""
	I0612 21:41:47.111660   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.111671   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:47.111679   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:47.111744   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:47.150700   80762 cri.go:89] found id: ""
	I0612 21:41:47.150725   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.150733   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:47.150739   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:47.150787   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:47.189547   80762 cri.go:89] found id: ""
	I0612 21:41:47.189576   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.189586   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:47.189593   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:47.189660   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:47.229482   80762 cri.go:89] found id: ""
	I0612 21:41:47.229510   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.229522   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:47.229530   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:47.229599   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:47.266798   80762 cri.go:89] found id: ""
	I0612 21:41:47.266826   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.266837   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:47.266844   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:47.266906   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:47.302256   80762 cri.go:89] found id: ""
	I0612 21:41:47.302280   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.302287   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:47.302295   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:47.302306   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:47.354485   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:47.354526   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:47.368689   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:47.368713   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:47.438219   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:47.438244   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:47.438257   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:47.514199   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:47.514227   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:46.165541   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:48.664957   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:45.512922   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:47.513491   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.012630   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:47.817136   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.317083   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.056394   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:50.069348   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:50.069482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:50.106057   80762 cri.go:89] found id: ""
	I0612 21:41:50.106087   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.106097   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:50.106104   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:50.106162   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:50.144532   80762 cri.go:89] found id: ""
	I0612 21:41:50.144560   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.144571   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:50.144579   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:50.144662   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:50.184549   80762 cri.go:89] found id: ""
	I0612 21:41:50.184575   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.184583   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:50.184588   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:50.184648   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:50.228520   80762 cri.go:89] found id: ""
	I0612 21:41:50.228555   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.228569   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:50.228578   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:50.228649   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:50.265697   80762 cri.go:89] found id: ""
	I0612 21:41:50.265726   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.265737   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:50.265744   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:50.265818   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:50.301353   80762 cri.go:89] found id: ""
	I0612 21:41:50.301393   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.301410   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:50.301416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:50.301477   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:50.337273   80762 cri.go:89] found id: ""
	I0612 21:41:50.337298   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.337306   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:50.337313   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:50.337374   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:50.383090   80762 cri.go:89] found id: ""
	I0612 21:41:50.383116   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.383126   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:50.383138   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:50.383151   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:50.454193   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:50.454240   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:50.477753   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:50.477779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:50.544052   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:50.544075   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:50.544091   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:50.626441   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:50.626480   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:50.666068   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.666287   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.013142   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:54.512869   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.318942   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:54.816918   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:56.818011   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:53.168599   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:53.181682   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:53.181764   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:53.228060   80762 cri.go:89] found id: ""
	I0612 21:41:53.228096   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.228107   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:53.228115   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:53.228176   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:53.264867   80762 cri.go:89] found id: ""
	I0612 21:41:53.264890   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.264898   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:53.264903   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:53.264950   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:53.298351   80762 cri.go:89] found id: ""
	I0612 21:41:53.298378   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.298386   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:53.298392   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:53.298448   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:53.335888   80762 cri.go:89] found id: ""
	I0612 21:41:53.335910   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.335917   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:53.335922   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:53.335980   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:53.376131   80762 cri.go:89] found id: ""
	I0612 21:41:53.376166   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.376175   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:53.376183   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:53.376240   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:53.412059   80762 cri.go:89] found id: ""
	I0612 21:41:53.412082   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.412088   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:53.412097   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:53.412142   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:53.446776   80762 cri.go:89] found id: ""
	I0612 21:41:53.446805   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.446816   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:53.446823   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:53.446894   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:53.482411   80762 cri.go:89] found id: ""
	I0612 21:41:53.482433   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.482441   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:53.482449   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:53.482460   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:53.522419   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:53.522448   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:53.573107   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:53.573141   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:53.587101   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:53.587147   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:53.665631   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:53.665660   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:53.665675   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:56.242482   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:56.255606   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:56.255682   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:56.290837   80762 cri.go:89] found id: ""
	I0612 21:41:56.290861   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.290872   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:56.290880   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:56.290938   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:56.325429   80762 cri.go:89] found id: ""
	I0612 21:41:56.325458   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.325466   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:56.325471   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:56.325534   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:56.359809   80762 cri.go:89] found id: ""
	I0612 21:41:56.359835   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.359845   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:56.359852   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:56.359912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:56.397775   80762 cri.go:89] found id: ""
	I0612 21:41:56.397803   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.397815   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:56.397823   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:56.397884   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:56.433917   80762 cri.go:89] found id: ""
	I0612 21:41:56.433945   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.433956   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:56.433963   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:56.434028   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:56.467390   80762 cri.go:89] found id: ""
	I0612 21:41:56.467419   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.467429   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:56.467438   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:56.467496   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:56.504014   80762 cri.go:89] found id: ""
	I0612 21:41:56.504048   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.504059   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:56.504067   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:56.504138   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:56.544157   80762 cri.go:89] found id: ""
	I0612 21:41:56.544187   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.544198   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:56.544209   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:56.544224   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:56.595431   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:56.595462   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:56.608897   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:56.608936   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:56.682706   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:56.682735   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:56.682749   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:56.762598   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:56.762634   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:55.166152   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:57.167363   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.666265   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:56.514832   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:58.515091   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.317285   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:01.818345   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.302898   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:59.317901   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:59.317976   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:59.360136   80762 cri.go:89] found id: ""
	I0612 21:41:59.360164   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.360174   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:59.360181   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:59.360249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:59.397205   80762 cri.go:89] found id: ""
	I0612 21:41:59.397233   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.397244   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:59.397252   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:59.397312   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:59.437063   80762 cri.go:89] found id: ""
	I0612 21:41:59.437093   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.437105   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:59.437113   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:59.437172   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:59.472800   80762 cri.go:89] found id: ""
	I0612 21:41:59.472827   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.472835   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:59.472843   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:59.472904   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:59.509452   80762 cri.go:89] found id: ""
	I0612 21:41:59.509474   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.509482   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:59.509487   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:59.509534   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:59.546121   80762 cri.go:89] found id: ""
	I0612 21:41:59.546151   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.546162   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:59.546170   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:59.546231   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:59.582983   80762 cri.go:89] found id: ""
	I0612 21:41:59.583007   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.583014   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:59.583020   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:59.583065   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:59.621110   80762 cri.go:89] found id: ""
	I0612 21:41:59.621148   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.621160   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:59.621171   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:59.621187   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:59.673113   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:59.673143   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:59.688106   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:59.688171   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:59.767653   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:59.767678   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:59.767692   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:59.848467   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:59.848507   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:02.391324   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:02.406543   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:02.406621   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:02.442225   80762 cri.go:89] found id: ""
	I0612 21:42:02.442255   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.442265   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:02.442273   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:02.442341   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:02.479445   80762 cri.go:89] found id: ""
	I0612 21:42:02.479476   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.479487   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:02.479495   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:02.479557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:02.517654   80762 cri.go:89] found id: ""
	I0612 21:42:02.517685   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.517697   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:02.517705   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:02.517775   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:02.562743   80762 cri.go:89] found id: ""
	I0612 21:42:02.562777   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.562788   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:02.562807   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:02.562873   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:02.597775   80762 cri.go:89] found id: ""
	I0612 21:42:02.597805   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.597816   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:02.597824   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:02.597886   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:02.633869   80762 cri.go:89] found id: ""
	I0612 21:42:02.633901   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.633913   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:02.633921   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:02.633979   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:02.671931   80762 cri.go:89] found id: ""
	I0612 21:42:02.671962   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.671974   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:02.671982   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:02.672044   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:02.709162   80762 cri.go:89] found id: ""
	I0612 21:42:02.709192   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.709204   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:02.709214   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:02.709228   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:02.722937   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:02.722967   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:02.798249   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:02.798275   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:02.798292   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:02.165664   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:04.166215   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:01.012458   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:03.513414   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:04.317221   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:06.818062   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:02.876339   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:02.876376   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:02.913080   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:02.913106   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:05.464433   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:05.478249   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:05.478326   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:05.520742   80762 cri.go:89] found id: ""
	I0612 21:42:05.520765   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.520772   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:05.520778   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:05.520840   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:05.564864   80762 cri.go:89] found id: ""
	I0612 21:42:05.564896   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.564904   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:05.564911   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:05.564956   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:05.602917   80762 cri.go:89] found id: ""
	I0612 21:42:05.602942   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.602950   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:05.602956   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:05.603040   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:05.645073   80762 cri.go:89] found id: ""
	I0612 21:42:05.645104   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.645112   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:05.645119   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:05.645166   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:05.684133   80762 cri.go:89] found id: ""
	I0612 21:42:05.684165   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.684176   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:05.684184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:05.684249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:05.721461   80762 cri.go:89] found id: ""
	I0612 21:42:05.721489   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.721499   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:05.721506   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:05.721573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:05.756710   80762 cri.go:89] found id: ""
	I0612 21:42:05.756744   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.756755   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:05.756763   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:05.756814   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:05.792182   80762 cri.go:89] found id: ""
	I0612 21:42:05.792210   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.792220   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:05.792230   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:05.792245   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:05.836597   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:05.836632   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:05.888704   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:05.888742   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:05.903354   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:05.903387   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:05.976146   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:05.976169   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:05.976183   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:06.664789   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.666830   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:06.013885   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.512997   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:09.316398   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:11.317014   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.559612   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:08.573592   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:08.573648   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:08.613347   80762 cri.go:89] found id: ""
	I0612 21:42:08.613373   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.613381   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:08.613387   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:08.613449   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:08.650606   80762 cri.go:89] found id: ""
	I0612 21:42:08.650634   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.650643   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:08.650648   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:08.650692   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:08.687056   80762 cri.go:89] found id: ""
	I0612 21:42:08.687087   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.687097   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:08.687105   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:08.687191   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:08.723112   80762 cri.go:89] found id: ""
	I0612 21:42:08.723138   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.723146   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:08.723161   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:08.723238   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:08.764772   80762 cri.go:89] found id: ""
	I0612 21:42:08.764801   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.764812   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:08.764820   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:08.764888   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:08.801914   80762 cri.go:89] found id: ""
	I0612 21:42:08.801944   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.801954   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:08.801962   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:08.802025   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:08.837991   80762 cri.go:89] found id: ""
	I0612 21:42:08.838017   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.838025   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:08.838030   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:08.838084   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:08.874977   80762 cri.go:89] found id: ""
	I0612 21:42:08.875016   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.875027   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:08.875039   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:08.875058   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:08.931628   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:08.931659   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:08.946763   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:08.946791   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:09.028039   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:09.028061   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:09.028079   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:09.104350   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:09.104406   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:11.645114   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:11.659382   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:11.659455   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:11.702205   80762 cri.go:89] found id: ""
	I0612 21:42:11.702236   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.702246   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:11.702254   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:11.702309   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:11.748328   80762 cri.go:89] found id: ""
	I0612 21:42:11.748350   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.748357   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:11.748362   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:11.748408   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:11.788980   80762 cri.go:89] found id: ""
	I0612 21:42:11.789009   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.789020   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:11.789027   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:11.789083   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:11.829886   80762 cri.go:89] found id: ""
	I0612 21:42:11.829910   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.829920   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:11.829928   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:11.830006   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:11.870088   80762 cri.go:89] found id: ""
	I0612 21:42:11.870120   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.870131   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:11.870138   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:11.870201   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:11.907862   80762 cri.go:89] found id: ""
	I0612 21:42:11.907893   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.907905   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:11.907913   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:11.907973   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:11.947773   80762 cri.go:89] found id: ""
	I0612 21:42:11.947798   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.947808   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:11.947816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:11.947876   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:11.987806   80762 cri.go:89] found id: ""
	I0612 21:42:11.987837   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.987848   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:11.987859   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:11.987878   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:12.043451   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:12.043481   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:12.057946   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:12.057980   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:12.134265   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:12.134298   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:12.134310   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:12.221276   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:12.221315   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:11.165305   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.165846   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:11.012728   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.512292   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.512327   80243 pod_ready.go:81] duration metric: took 4m0.006424182s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	E0612 21:42:13.512336   80243 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0612 21:42:13.512343   80243 pod_ready.go:38] duration metric: took 4m5.595554955s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:42:13.512359   80243 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:42:13.512383   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:13.512428   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:13.571855   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:13.571882   80243 cri.go:89] found id: ""
	I0612 21:42:13.571892   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:13.571942   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.576505   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:13.576557   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:13.614768   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:13.614792   80243 cri.go:89] found id: ""
	I0612 21:42:13.614799   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:13.614847   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.619276   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:13.619342   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:13.662832   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:13.662856   80243 cri.go:89] found id: ""
	I0612 21:42:13.662866   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:13.662931   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.667956   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:13.668031   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:13.710456   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:13.710479   80243 cri.go:89] found id: ""
	I0612 21:42:13.710487   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:13.710540   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.715411   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:13.715480   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:13.759924   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:13.759952   80243 cri.go:89] found id: ""
	I0612 21:42:13.759965   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:13.760027   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.764854   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:13.764919   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:13.804802   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:13.804826   80243 cri.go:89] found id: ""
	I0612 21:42:13.804834   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:13.804891   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.809421   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:13.809489   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:13.846580   80243 cri.go:89] found id: ""
	I0612 21:42:13.846615   80243 logs.go:276] 0 containers: []
	W0612 21:42:13.846625   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:13.846633   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:13.846685   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:13.893480   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:13.893504   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:13.893510   80243 cri.go:89] found id: ""
	I0612 21:42:13.893523   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:13.893571   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.898530   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.905072   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:13.905100   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:13.942165   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:13.942195   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:13.981852   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:13.981882   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:14.018431   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:14.018457   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:14.067616   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:14.067645   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:14.082853   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:14.082886   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:14.220156   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:14.220188   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:14.274395   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:14.274430   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:14.319087   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:14.319116   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:14.834792   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:14.834831   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:14.893213   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:14.893252   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:14.957423   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:14.957466   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:15.013756   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:15.013803   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
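
Editor's note: the block above repeats one fixed pattern for every control-plane component: resolve the container ID with `crictl ps -a --quiet --name=<component>`, then tail its last 400 log lines with `crictl logs --tail 400 <id>`. A minimal local sketch of that pattern follows (plain os/exec rather than minikube's ssh_runner; the helper name tailComponentLogs is hypothetical, the crictl flags are the ones shown in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // tailComponentLogs mirrors the pattern in the log above: look up the
    // container ID for a named component, then tail its last 400 log lines.
    // It runs crictl locally; minikube runs the same commands over SSH.
    func tailComponentLogs(name string) (string, error) {
    	// sudo crictl ps -a --quiet --name=<name>
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return "", fmt.Errorf("listing %s containers: %w", name, err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		// Matches the `No container was found matching ...` warnings above.
    		return "", fmt.Errorf("no container found matching %q", name)
    	}
    	// sudo crictl logs --tail 400 <id>
    	logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", ids[0]).CombinedOutput()
    	return string(logs), err
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
    		if logs, err := tailComponentLogs(c); err != nil {
    			fmt.Println(c, "error:", err)
    		} else {
    			fmt.Println(c, "log bytes:", len(logs))
    		}
    	}
    }
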
	I0612 21:42:13.318558   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:15.318904   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:14.760949   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:14.775242   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:14.775356   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:14.818818   80762 cri.go:89] found id: ""
	I0612 21:42:14.818847   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.818856   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:14.818863   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:14.818931   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:14.859106   80762 cri.go:89] found id: ""
	I0612 21:42:14.859146   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.859157   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:14.859164   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:14.859247   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:14.894993   80762 cri.go:89] found id: ""
	I0612 21:42:14.895016   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.895026   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:14.895037   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:14.895087   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:14.943534   80762 cri.go:89] found id: ""
	I0612 21:42:14.943561   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.943572   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:14.943579   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:14.943645   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:14.985243   80762 cri.go:89] found id: ""
	I0612 21:42:14.985267   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.985274   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:14.985280   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:14.985328   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:15.029253   80762 cri.go:89] found id: ""
	I0612 21:42:15.029286   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.029297   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:15.029305   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:15.029371   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:15.063471   80762 cri.go:89] found id: ""
	I0612 21:42:15.063499   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.063510   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:15.063517   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:15.063580   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:15.101152   80762 cri.go:89] found id: ""
	I0612 21:42:15.101181   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.101201   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:15.101212   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:15.101227   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:15.178398   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:15.178416   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:15.178429   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:15.255420   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:15.255468   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:15.295492   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:15.295519   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:15.345010   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:15.345051   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
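
Editor's note: in the cycle above (the v1.20.0 old-k8s-version cluster) every `crictl ps` query returns no containers and `kubectl describe nodes` fails with "connection refused" on localhost:8443, which simply means nothing is listening on the apiserver port yet. A minimal sketch of a pre-check that distinguishes "port not open" from other kubectl failures (an assumed probe, not part of minikube):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // apiserverListening reports whether anything accepts TCP connections on the
    // apiserver address. A "connection refused" here corresponds to the kubectl
    // error seen above and means the control plane has not come up yet.
    func apiserverListening(addr string) bool {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		return false
    	}
    	conn.Close()
    	return true
    }

    func main() {
    	if !apiserverListening("localhost:8443") {
    		fmt.Println("apiserver not listening on localhost:8443; `kubectl describe nodes` would be refused")
    		return
    	}
    	fmt.Println("apiserver port open; kubectl commands should at least connect")
    }
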
	I0612 21:42:15.166546   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.666141   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:19.672626   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.561453   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:17.579672   80243 api_server.go:72] duration metric: took 4m17.376220984s to wait for apiserver process to appear ...
	I0612 21:42:17.579702   80243 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:42:17.579741   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:17.579787   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:17.620290   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:17.620318   80243 cri.go:89] found id: ""
	I0612 21:42:17.620325   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:17.620387   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.624598   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:17.624658   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:17.665957   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:17.665985   80243 cri.go:89] found id: ""
	I0612 21:42:17.665995   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:17.666056   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.671143   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:17.671222   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:17.717377   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:17.717396   80243 cri.go:89] found id: ""
	I0612 21:42:17.717404   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:17.717459   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.721710   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:17.721774   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:17.762712   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:17.762739   80243 cri.go:89] found id: ""
	I0612 21:42:17.762749   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:17.762807   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.767258   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:17.767329   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:17.803905   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:17.803956   80243 cri.go:89] found id: ""
	I0612 21:42:17.803969   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:17.804034   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.808260   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:17.808323   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:17.847402   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:17.847432   80243 cri.go:89] found id: ""
	I0612 21:42:17.847441   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:17.847502   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.851685   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:17.851757   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:17.897026   80243 cri.go:89] found id: ""
	I0612 21:42:17.897051   80243 logs.go:276] 0 containers: []
	W0612 21:42:17.897059   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:17.897065   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:17.897122   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:17.953764   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:17.953793   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:17.953799   80243 cri.go:89] found id: ""
	I0612 21:42:17.953808   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:17.953875   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.959822   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.965103   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:17.965127   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:18.089205   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:18.089229   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:18.153823   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:18.153876   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:18.198010   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:18.198037   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:18.255380   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:18.255410   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:18.271692   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:18.271725   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:18.318018   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:18.318049   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:18.379352   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:18.379386   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:18.437854   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:18.437884   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:18.487618   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:18.487650   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:18.934735   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:18.934784   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:18.983010   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:18.983050   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:19.043569   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:19.043605   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:17.819422   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:20.315423   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.862640   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:17.879256   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:17.879333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:17.918910   80762 cri.go:89] found id: ""
	I0612 21:42:17.918940   80762 logs.go:276] 0 containers: []
	W0612 21:42:17.918951   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:17.918958   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:17.919018   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:17.959701   80762 cri.go:89] found id: ""
	I0612 21:42:17.959726   80762 logs.go:276] 0 containers: []
	W0612 21:42:17.959734   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:17.959739   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:17.959820   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:18.005096   80762 cri.go:89] found id: ""
	I0612 21:42:18.005125   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.005142   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:18.005150   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:18.005211   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:18.046877   80762 cri.go:89] found id: ""
	I0612 21:42:18.046907   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.046919   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:18.046927   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:18.046992   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:18.087907   80762 cri.go:89] found id: ""
	I0612 21:42:18.087934   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.087946   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:18.087953   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:18.088016   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:18.139423   80762 cri.go:89] found id: ""
	I0612 21:42:18.139453   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.139464   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:18.139473   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:18.139535   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:18.180433   80762 cri.go:89] found id: ""
	I0612 21:42:18.180459   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.180469   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:18.180476   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:18.180537   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:18.220966   80762 cri.go:89] found id: ""
	I0612 21:42:18.220996   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.221005   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:18.221015   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:18.221033   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:18.276006   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:18.276031   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:18.290975   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:18.291026   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:18.369318   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:18.369345   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:18.369359   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:18.451336   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:18.451380   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
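
Editor's note: besides per-container logs, each gather pass above also pulls node-level sources: the kubelet and CRI-O journals (`journalctl -u <unit> -n 400`), dmesg warnings and above, and a container status listing. A condensed sketch of that pass, with the exact commands copied from the ssh_runner lines and run locally (helper name gatherNodeLogs is hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherNodeLogs runs the node-level commands seen in the log above and
    // returns each command's combined output keyed by a short label.
    func gatherNodeLogs() map[string]string {
    	cmds := map[string]string{
    		"kubelet":          `sudo journalctl -u kubelet -n 400`,
    		"CRI-O":            `sudo journalctl -u crio -n 400`,
    		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	out := make(map[string]string, len(cmds))
    	for label, cmd := range cmds {
    		// Each command is run through bash -c, exactly as the ssh_runner lines show.
    		b, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		out[label] = string(b)
    	}
    	return out
    }

    func main() {
    	for label, logs := range gatherNodeLogs() {
    		fmt.Printf("%-17s %d bytes\n", label, len(logs))
    	}
    }
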
	I0612 21:42:21.016353   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:21.030544   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:21.030611   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:21.072558   80762 cri.go:89] found id: ""
	I0612 21:42:21.072583   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.072591   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:21.072596   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:21.072649   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:21.106320   80762 cri.go:89] found id: ""
	I0612 21:42:21.106352   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.106364   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:21.106372   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:21.106431   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:21.139155   80762 cri.go:89] found id: ""
	I0612 21:42:21.139201   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.139212   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:21.139220   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:21.139285   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:21.178731   80762 cri.go:89] found id: ""
	I0612 21:42:21.178762   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.178772   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:21.178779   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:21.178838   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:21.213606   80762 cri.go:89] found id: ""
	I0612 21:42:21.213635   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.213645   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:21.213652   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:21.213707   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:21.250966   80762 cri.go:89] found id: ""
	I0612 21:42:21.250993   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.251009   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:21.251017   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:21.251084   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:21.289434   80762 cri.go:89] found id: ""
	I0612 21:42:21.289457   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.289465   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:21.289474   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:21.289520   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:21.329028   80762 cri.go:89] found id: ""
	I0612 21:42:21.329058   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.329069   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:21.329080   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:21.329098   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:21.342621   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:21.342648   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:21.418742   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:21.418766   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:21.418779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:21.493909   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:21.493944   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:21.534693   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:21.534723   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:22.165337   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.166122   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:21.581443   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:42:21.586756   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 200:
	ok
	I0612 21:42:21.587670   80243 api_server.go:141] control plane version: v1.30.1
	I0612 21:42:21.587691   80243 api_server.go:131] duration metric: took 4.007982669s to wait for apiserver health ...
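
Editor's note: once the kube-apiserver process exists, the wait above switches from tailing logs to polling the /healthz endpoint (here https://192.168.61.80:8444/healthz) until it answers 200 "ok". A minimal sketch of that probe; it assumes an insecure TLS check purely for brevity, whereas minikube authenticates with the cluster's own certificates:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // healthz polls the apiserver health endpoint until it answers 200 "ok"
    // or the deadline expires.
    func healthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Skipping verification keeps the sketch short; a real check should
    		// trust the cluster CA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("%s returned 200: %s\n", url, body)
    				return nil
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("%s did not become healthy within %s", url, deadline)
    }

    func main() {
    	if err := healthz("https://192.168.61.80:8444/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
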
	I0612 21:42:21.587699   80243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:42:21.587720   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:21.587761   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:21.627942   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:21.627965   80243 cri.go:89] found id: ""
	I0612 21:42:21.627974   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:21.628036   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.632308   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:21.632380   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:21.674453   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:21.674474   80243 cri.go:89] found id: ""
	I0612 21:42:21.674482   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:21.674539   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.679303   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:21.679376   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:21.717454   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:21.717483   80243 cri.go:89] found id: ""
	I0612 21:42:21.717492   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:21.717555   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.722113   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:21.722176   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:21.758752   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:21.758780   80243 cri.go:89] found id: ""
	I0612 21:42:21.758790   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:21.758847   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.763397   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:21.763465   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:21.802552   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:21.802574   80243 cri.go:89] found id: ""
	I0612 21:42:21.802583   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:21.802641   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.807570   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:21.807633   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:21.855093   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:21.855118   80243 cri.go:89] found id: ""
	I0612 21:42:21.855128   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:21.855212   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.860163   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:21.860231   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:21.907934   80243 cri.go:89] found id: ""
	I0612 21:42:21.907960   80243 logs.go:276] 0 containers: []
	W0612 21:42:21.907969   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:21.907977   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:21.908046   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:21.950085   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:21.950114   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:21.950120   80243 cri.go:89] found id: ""
	I0612 21:42:21.950128   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:21.950186   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.955633   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.960017   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:21.960038   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:22.015659   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:22.015708   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:22.074063   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:22.074093   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:22.113545   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:22.113581   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:22.152550   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:22.152583   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:22.556816   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:22.556856   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:22.602506   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:22.602542   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:22.655545   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:22.655577   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:22.775731   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:22.775775   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:22.827447   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:22.827476   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:22.864866   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:22.864898   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:22.903885   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:22.903912   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:22.947166   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:22.947214   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:25.472711   80243 system_pods.go:59] 8 kube-system pods found
	I0612 21:42:25.472743   80243 system_pods.go:61] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running
	I0612 21:42:25.472750   80243 system_pods.go:61] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running
	I0612 21:42:25.472755   80243 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running
	I0612 21:42:25.472761   80243 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running
	I0612 21:42:25.472765   80243 system_pods.go:61] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running
	I0612 21:42:25.472770   80243 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running
	I0612 21:42:25.472777   80243 system_pods.go:61] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:42:25.472783   80243 system_pods.go:61] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running
	I0612 21:42:25.472794   80243 system_pods.go:74] duration metric: took 3.885088008s to wait for pod list to return data ...
	I0612 21:42:25.472803   80243 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:42:25.475046   80243 default_sa.go:45] found service account: "default"
	I0612 21:42:25.475072   80243 default_sa.go:55] duration metric: took 2.260179ms for default service account to be created ...
	I0612 21:42:25.475082   80243 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:42:25.479903   80243 system_pods.go:86] 8 kube-system pods found
	I0612 21:42:25.479925   80243 system_pods.go:89] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running
	I0612 21:42:25.479931   80243 system_pods.go:89] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running
	I0612 21:42:25.479935   80243 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running
	I0612 21:42:25.479940   80243 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running
	I0612 21:42:25.479944   80243 system_pods.go:89] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running
	I0612 21:42:25.479950   80243 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running
	I0612 21:42:25.479959   80243 system_pods.go:89] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:42:25.479969   80243 system_pods.go:89] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running
	I0612 21:42:25.479979   80243 system_pods.go:126] duration metric: took 4.890624ms to wait for k8s-apps to be running ...
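
Editor's note: the system_pods wait above enumerates the kube-system pods and their states before declaring the k8s-apps check done. A rough way to reproduce that listing with the node's own kubectl binary, sketched via os/exec (binary path and kubeconfig are the ones used throughout the log; the jsonpath output format is an illustrative choice):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // kubeSystemPodPhases lists pod name and phase in kube-system, roughly the
    // information the system_pods wait above reports.
    func kubeSystemPodPhases() (map[string]string, error) {
    	out, err := exec.Command(
    		"sudo", "/var/lib/minikube/binaries/v1.30.1/kubectl",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"get", "pods", "-n", "kube-system",
    		"-o", `jsonpath={range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}`,
    	).Output()
    	if err != nil {
    		return nil, err
    	}
    	phases := make(map[string]string)
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if fields := strings.Fields(line); len(fields) == 2 {
    			phases[fields[0]] = fields[1]
    		}
    	}
    	return phases, nil
    }

    func main() {
    	phases, err := kubeSystemPodPhases()
    	if err != nil {
    		fmt.Println("listing kube-system pods:", err)
    		return
    	}
    	for name, phase := range phases {
    		fmt.Println(name, phase)
    	}
    }
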
	I0612 21:42:25.479990   80243 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:42:25.480037   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:42:25.496529   80243 system_svc.go:56] duration metric: took 16.534285ms WaitForService to wait for kubelet
	I0612 21:42:25.496549   80243 kubeadm.go:576] duration metric: took 4m25.293104149s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:42:25.496565   80243 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:42:25.499277   80243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:42:25.499293   80243 node_conditions.go:123] node cpu capacity is 2
	I0612 21:42:25.499304   80243 node_conditions.go:105] duration metric: took 2.734965ms to run NodePressure ...
	I0612 21:42:25.499314   80243 start.go:240] waiting for startup goroutines ...
	I0612 21:42:25.499320   80243 start.go:245] waiting for cluster config update ...
	I0612 21:42:25.499339   80243 start.go:254] writing updated cluster config ...
	I0612 21:42:25.499628   80243 ssh_runner.go:195] Run: rm -f paused
	I0612 21:42:25.547780   80243 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:42:25.549693   80243 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-376087" cluster and "default" namespace by default
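
Editor's note: the final line of this run compares the client kubectl version (1.30.2) with the cluster version (1.30.1) and reports a minor-version skew of 0; kubectl is supported within one minor version of the apiserver in either direction. A small sketch of that comparison (hypothetical helper, not minikube's code):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor versions of
    // two "major.minor.patch" strings, e.g. ("1.30.2", "1.30.1") -> 0.
    func minorSkew(client, cluster string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("malformed version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	c, err := minor(client)
    	if err != nil {
    		return 0, err
    	}
    	s, err := minor(cluster)
    	if err != nil {
    		return 0, err
    	}
    	if c > s {
    		return c - s, nil
    	}
    	return s - c, nil
    }

    func main() {
    	skew, err := minorSkew("1.30.2", "1.30.1")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("minor skew:", skew) // prints: minor skew: 0
    }
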
	I0612 21:42:22.317078   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.317826   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:26.818102   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.086466   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:24.101820   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:24.101877   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:24.145732   80762 cri.go:89] found id: ""
	I0612 21:42:24.145757   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.145767   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:24.145774   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:24.145832   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:24.182765   80762 cri.go:89] found id: ""
	I0612 21:42:24.182788   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.182795   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:24.182801   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:24.182889   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:24.235093   80762 cri.go:89] found id: ""
	I0612 21:42:24.235121   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.235129   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:24.235134   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:24.235208   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:24.269788   80762 cri.go:89] found id: ""
	I0612 21:42:24.269809   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.269816   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:24.269822   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:24.269867   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:24.306594   80762 cri.go:89] found id: ""
	I0612 21:42:24.306620   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.306628   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:24.306634   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:24.306693   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:24.343766   80762 cri.go:89] found id: ""
	I0612 21:42:24.343786   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.343795   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:24.343802   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:24.343858   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:24.384417   80762 cri.go:89] found id: ""
	I0612 21:42:24.384447   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.384457   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:24.384464   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:24.384524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:24.424935   80762 cri.go:89] found id: ""
	I0612 21:42:24.424958   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.424965   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:24.424974   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:24.424988   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:24.499737   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:24.499771   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:24.537631   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:24.537667   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:24.593743   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:24.593779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:24.608078   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:24.608107   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:24.679729   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:27.180828   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:27.195484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:27.195552   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:27.235725   80762 cri.go:89] found id: ""
	I0612 21:42:27.235750   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.235761   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:27.235768   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:27.235816   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:27.279763   80762 cri.go:89] found id: ""
	I0612 21:42:27.279795   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.279806   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:27.279814   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:27.279875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:27.320510   80762 cri.go:89] found id: ""
	I0612 21:42:27.320534   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.320543   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:27.320554   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:27.320641   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:27.359195   80762 cri.go:89] found id: ""
	I0612 21:42:27.359227   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.359239   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:27.359247   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:27.359312   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:27.394977   80762 cri.go:89] found id: ""
	I0612 21:42:27.395004   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.395015   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:27.395033   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:27.395099   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:27.431905   80762 cri.go:89] found id: ""
	I0612 21:42:27.431925   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.431933   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:27.431945   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:27.431990   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:27.469929   80762 cri.go:89] found id: ""
	I0612 21:42:27.469954   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.469961   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:27.469967   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:27.470024   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:27.505128   80762 cri.go:89] found id: ""
	I0612 21:42:27.505153   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.505160   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:27.505169   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:27.505180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:27.556739   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:27.556771   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:27.572730   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:27.572757   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:27.646797   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:27.646819   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:27.646836   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:27.726554   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:27.726599   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:26.665496   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:29.166323   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:29.316302   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:31.316334   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:30.268770   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:30.282575   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:30.282635   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:30.321243   80762 cri.go:89] found id: ""
	I0612 21:42:30.321276   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.321288   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:30.321295   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:30.321342   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:30.359403   80762 cri.go:89] found id: ""
	I0612 21:42:30.359432   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.359443   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:30.359451   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:30.359505   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:30.395967   80762 cri.go:89] found id: ""
	I0612 21:42:30.396006   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.396015   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:30.396028   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:30.396087   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:30.438093   80762 cri.go:89] found id: ""
	I0612 21:42:30.438123   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.438132   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:30.438138   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:30.438192   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:30.476859   80762 cri.go:89] found id: ""
	I0612 21:42:30.476888   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.476898   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:30.476905   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:30.476968   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:30.512998   80762 cri.go:89] found id: ""
	I0612 21:42:30.513026   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.513037   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:30.513045   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:30.513106   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:30.548822   80762 cri.go:89] found id: ""
	I0612 21:42:30.548847   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.548855   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:30.548861   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:30.548908   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:30.584385   80762 cri.go:89] found id: ""
	I0612 21:42:30.584417   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.584426   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
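The sweep above queries CRI-O once per control-plane component and finds no containers, which is why the harness falls back to journal and dmesg output. A compact sketch of the same per-component check (component names copied from the log):

    # List all containers (running or exited) for each expected component.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"
    done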
	I0612 21:42:30.584439   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:30.584454   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:30.685995   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:30.686019   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:30.686030   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:30.778789   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:30.778827   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:30.819467   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:30.819511   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:30.872563   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:30.872599   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:31.659828   80404 pod_ready.go:81] duration metric: took 4m0.000909177s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" ...
	E0612 21:42:31.659857   80404 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0612 21:42:31.659875   80404 pod_ready.go:38] duration metric: took 4m13.021158077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:42:31.659904   80404 kubeadm.go:591] duration metric: took 4m20.257268424s to restartPrimaryControlPlane
	W0612 21:42:31.659968   80404 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:42:31.660002   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:42:33.316457   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:35.316525   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:33.387831   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:33.401663   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:33.401740   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:33.439690   80762 cri.go:89] found id: ""
	I0612 21:42:33.439723   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.439735   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:33.439743   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:33.439805   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:33.480330   80762 cri.go:89] found id: ""
	I0612 21:42:33.480357   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.480365   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:33.480371   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:33.480422   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:33.520367   80762 cri.go:89] found id: ""
	I0612 21:42:33.520396   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.520407   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:33.520415   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:33.520476   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:33.556859   80762 cri.go:89] found id: ""
	I0612 21:42:33.556892   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.556904   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:33.556911   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:33.556963   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:33.595982   80762 cri.go:89] found id: ""
	I0612 21:42:33.596014   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.596024   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:33.596030   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:33.596091   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:33.630942   80762 cri.go:89] found id: ""
	I0612 21:42:33.630974   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.630986   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:33.630994   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:33.631055   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:33.671649   80762 cri.go:89] found id: ""
	I0612 21:42:33.671676   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.671684   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:33.671690   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:33.671734   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:33.716664   80762 cri.go:89] found id: ""
	I0612 21:42:33.716690   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.716700   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:33.716711   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:33.716726   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:33.734168   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:33.734198   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:33.826469   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:33.826491   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:33.826507   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:33.915109   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:33.915142   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:33.957969   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:33.958007   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:36.515258   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:36.529603   80762 kubeadm.go:591] duration metric: took 4m4.234271001s to restartPrimaryControlPlane
	W0612 21:42:36.529688   80762 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:42:36.529719   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:42:37.316720   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:39.317633   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:41.816783   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:41.545629   80762 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.01588354s)
	I0612 21:42:41.545734   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:42:41.561025   80762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:42:41.572788   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:42:41.583027   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:42:41.583052   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:42:41.583095   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:42:41.593433   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:42:41.593502   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:42:41.603944   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:42:41.613382   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:42:41.613432   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:42:41.622874   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:42:41.632270   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:42:41.632370   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:42:41.642072   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:42:41.652120   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:42:41.652194   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
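Each grep/rm pair above checks whether a leftover kubeconfig still points at https://control-plane.minikube.internal:8443 and removes the file when it does not (or, as here, does not exist). A minimal sketch of the same cleanup as a loop, assuming only the four file names shown in the log:

    # Remove stale kubeconfigs that no longer reference the expected endpoint.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "${endpoint}" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done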
	I0612 21:42:41.662684   80762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:42:41.894903   80762 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
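The preflight warning above is advisory: the kubelet would not start automatically after a host reboot unless its unit is enabled. On a long-lived host it could be cleared with the command kubeadm itself suggests:

    # Enable the kubelet systemd unit, as recommended by the preflight warning.
    sudo systemctl enable kubelet.service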
	I0612 21:42:43.817122   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:45.817164   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:47.817201   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:50.316134   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:52.317090   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:54.318066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:56.816196   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:58.817948   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:01.316826   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:03.728120   80404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.068094257s)
	I0612 21:43:03.728183   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:03.744990   80404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:43:03.755365   80404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:43:03.765154   80404 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:43:03.765181   80404 kubeadm.go:156] found existing configuration files:
	
	I0612 21:43:03.765226   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:43:03.775246   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:43:03.775304   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:43:03.785389   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:43:03.794999   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:43:03.795046   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:43:03.804771   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:43:03.814137   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:43:03.814187   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:43:03.824449   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:43:03.833631   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:43:03.833687   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:43:03.843203   80404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:43:03.895827   80404 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 21:43:03.895927   80404 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:43:04.040495   80404 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:43:04.040666   80404 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:43:04.040822   80404 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0612 21:43:04.252894   80404 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:43:04.254835   80404 out.go:204]   - Generating certificates and keys ...
	I0612 21:43:04.254952   80404 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:43:04.255060   80404 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:43:04.255219   80404 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:43:04.255296   80404 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:43:04.255399   80404 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:43:04.255490   80404 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:43:04.255589   80404 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:43:04.255692   80404 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:43:04.255794   80404 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:43:04.255885   80404 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:43:04.255923   80404 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:43:04.255978   80404 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:43:04.460505   80404 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:43:04.640215   80404 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 21:43:04.722455   80404 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:43:04.862670   80404 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:43:05.112478   80404 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:43:05.113163   80404 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:43:05.115573   80404 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:43:03.817386   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:06.317207   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:05.117650   80404 out.go:204]   - Booting up control plane ...
	I0612 21:43:05.117758   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:43:05.117887   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:43:05.119410   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:43:05.139412   80404 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:43:05.139504   80404 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:43:05.139575   80404 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:43:05.268539   80404 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 21:43:05.268636   80404 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 21:43:05.771267   80404 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.898809ms
	I0612 21:43:05.771364   80404 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 21:43:11.274484   80404 kubeadm.go:309] [api-check] The API server is healthy after 5.503111655s
	I0612 21:43:11.291273   80404 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 21:43:11.319349   80404 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 21:43:11.357447   80404 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 21:43:11.357709   80404 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-591460 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 21:43:11.368936   80404 kubeadm.go:309] [bootstrap-token] Using token: 0iiegq.ujvrnknfmyshffxu
	I0612 21:43:08.816875   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:10.817031   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:11.370411   80404 out.go:204]   - Configuring RBAC rules ...
	I0612 21:43:11.370567   80404 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 21:43:11.375891   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 21:43:11.388345   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 21:43:11.392726   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 21:43:11.396867   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 21:43:11.401212   80404 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 21:43:11.683506   80404 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 21:43:12.114832   80404 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 21:43:12.683696   80404 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 21:43:12.683724   80404 kubeadm.go:309] 
	I0612 21:43:12.683811   80404 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 21:43:12.683823   80404 kubeadm.go:309] 
	I0612 21:43:12.683938   80404 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 21:43:12.683958   80404 kubeadm.go:309] 
	I0612 21:43:12.684002   80404 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 21:43:12.684070   80404 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 21:43:12.684129   80404 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 21:43:12.684146   80404 kubeadm.go:309] 
	I0612 21:43:12.684232   80404 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 21:43:12.684247   80404 kubeadm.go:309] 
	I0612 21:43:12.684317   80404 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 21:43:12.684330   80404 kubeadm.go:309] 
	I0612 21:43:12.684398   80404 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 21:43:12.684502   80404 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 21:43:12.684595   80404 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 21:43:12.684604   80404 kubeadm.go:309] 
	I0612 21:43:12.684700   80404 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 21:43:12.684807   80404 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 21:43:12.684816   80404 kubeadm.go:309] 
	I0612 21:43:12.684915   80404 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0iiegq.ujvrnknfmyshffxu \
	I0612 21:43:12.685061   80404 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 21:43:12.685105   80404 kubeadm.go:309] 	--control-plane 
	I0612 21:43:12.685116   80404 kubeadm.go:309] 
	I0612 21:43:12.685237   80404 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 21:43:12.685248   80404 kubeadm.go:309] 
	I0612 21:43:12.685352   80404 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0iiegq.ujvrnknfmyshffxu \
	I0612 21:43:12.685509   80404 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 21:43:12.685622   80404 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:43:12.685831   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:43:12.685848   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:43:12.687835   80404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:43:12.689100   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:43:12.700384   80404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
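The 496-byte conflist copied above is what configures the bridge CNI for this cluster. Its exact contents are not shown in the log, so the snippet below is only an illustrative conflist in the standard CNI format; the pod subnet and plugin options are placeholders, not necessarily what minikube writes:

    # Illustrative only (assumption): a generic bridge CNI conflist. The real
    # /etc/cni/net.d/1-k8s.conflist written above may differ.
    cat <<'EOF' > /tmp/example-bridge.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF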
	I0612 21:43:12.720228   80404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:43:12.720305   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:12.720330   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-591460 minikube.k8s.io/updated_at=2024_06_12T21_43_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=embed-certs-591460 minikube.k8s.io/primary=true
	I0612 21:43:12.751866   80404 ops.go:34] apiserver oom_adj: -16
	I0612 21:43:12.927644   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:13.428393   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:13.928221   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:14.428286   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:12.817125   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:15.316899   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:14.928273   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:15.428431   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:15.927968   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:16.428202   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:16.927882   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.428544   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.927844   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:18.428385   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:18.928105   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:19.428421   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.317080   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:19.317419   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:21.816670   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:19.928638   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:20.428310   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:20.928565   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:21.428377   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:21.928158   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:22.428356   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:22.927863   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:23.427955   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:23.928226   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:24.427823   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:24.928404   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:25.428367   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:25.514417   80404 kubeadm.go:1107] duration metric: took 12.794169259s to wait for elevateKubeSystemPrivileges
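The repeated `kubectl get sa default` calls above are a readiness poll: the harness retries roughly every 500ms until the default ServiceAccount exists, at which point kube-system privileges can be elevated. An equivalent manual poll, using the same binary and kubeconfig paths as the log:

    # Retry until the default ServiceAccount is available.
    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done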
	W0612 21:43:25.514460   80404 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 21:43:25.514470   80404 kubeadm.go:393] duration metric: took 5m14.162212832s to StartCluster
	I0612 21:43:25.514490   80404 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:43:25.514576   80404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:43:25.518597   80404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:43:25.518811   80404 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:43:25.520571   80404 out.go:177] * Verifying Kubernetes components...
	I0612 21:43:25.518903   80404 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:43:25.519030   80404 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:43:25.521967   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:43:25.522001   80404 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-591460"
	I0612 21:43:25.522043   80404 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-591460"
	W0612 21:43:25.522056   80404 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:43:25.522053   80404 addons.go:69] Setting default-storageclass=true in profile "embed-certs-591460"
	I0612 21:43:25.522089   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.522100   80404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-591460"
	I0612 21:43:25.522089   80404 addons.go:69] Setting metrics-server=true in profile "embed-certs-591460"
	I0612 21:43:25.522158   80404 addons.go:234] Setting addon metrics-server=true in "embed-certs-591460"
	W0612 21:43:25.522170   80404 addons.go:243] addon metrics-server should already be in state true
	I0612 21:43:25.522196   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.522502   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522509   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522532   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.522535   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.522585   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522611   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.538989   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I0612 21:43:25.539032   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0612 21:43:25.539591   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.539592   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.540199   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.540222   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.540293   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.540323   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.540610   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.540736   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.541265   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.541281   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.541312   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.541431   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.542393   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46299
	I0612 21:43:25.543042   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.543604   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.543643   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.543997   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.544208   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.547823   80404 addons.go:234] Setting addon default-storageclass=true in "embed-certs-591460"
	W0612 21:43:25.547849   80404 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:43:25.547878   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.548237   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.548272   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.558486   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46589
	I0612 21:43:25.558934   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.559936   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.559967   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.560387   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.560600   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.560728   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I0612 21:43:25.561116   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.561595   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.561610   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.561928   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.562198   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.562832   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.565065   80404 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:43:25.563946   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.565393   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0612 21:43:25.566521   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:43:25.566535   80404 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:43:25.566582   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.568114   80404 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:43:24.316660   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:25.810857   80157 pod_ready.go:81] duration metric: took 4m0.000926725s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" ...
	E0612 21:43:25.810888   80157 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0612 21:43:25.810936   80157 pod_ready.go:38] duration metric: took 4m14.539121336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:25.810971   80157 kubeadm.go:591] duration metric: took 4m21.56451584s to restartPrimaryControlPlane
	W0612 21:43:25.811042   80157 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:43:25.811074   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
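metrics-server-569cc877fc-d5mj6 never reported Ready inside the 4m0s window, so the harness gives up on restarting the control plane and resets the cluster. When reproducing this by hand, the usual first checks would look like the sketch below (the k8s-app=metrics-server label selector is an assumption about how the addon labels its pods):

    # Triage a metrics-server pod that never becomes Ready.
    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl -n kube-system describe pod metrics-server-569cc877fc-d5mj6
    kubectl -n kube-system logs deploy/metrics-server --tail=100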
	I0612 21:43:25.567032   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.569772   80404 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:43:25.569794   80404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:43:25.569812   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.570271   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.570291   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.570363   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.570698   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.571498   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.571514   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.571539   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.571691   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.571861   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.572032   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.572851   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.572894   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.573962   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.574403   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.574429   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.574762   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.574974   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.575164   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.575464   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.589637   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39227
	I0612 21:43:25.590155   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.591035   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.591059   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.591596   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.591845   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.593885   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.594095   80404 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:43:25.594112   80404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:43:25.594131   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.597769   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.598347   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.598379   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.598434   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.598635   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.598766   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.598860   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.762134   80404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:43:25.818663   80404 node_ready.go:35] waiting up to 6m0s for node "embed-certs-591460" to be "Ready" ...
	I0612 21:43:25.830753   80404 node_ready.go:49] node "embed-certs-591460" has status "Ready":"True"
	I0612 21:43:25.830780   80404 node_ready.go:38] duration metric: took 12.086962ms for node "embed-certs-591460" to be "Ready" ...
	I0612 21:43:25.830792   80404 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:25.841084   80404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:25.929395   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:43:25.929427   80404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:43:26.001489   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:43:26.016234   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:43:26.016275   80404 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:43:26.030851   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:43:26.062707   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:43:26.062741   80404 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:43:26.157461   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:43:27.281342   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.279809959s)
	I0612 21:43:27.281364   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.250478112s)
	I0612 21:43:27.281392   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281405   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281408   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281420   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281712   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.281730   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.281739   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281748   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281861   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.281916   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.281933   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281942   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.283567   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.283582   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.283592   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.283597   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.283728   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.283740   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.324600   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.324625   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.324937   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.324941   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.324965   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.366096   80404 pod_ready.go:92] pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:27.366126   80404 pod_ready.go:81] duration metric: took 1.52501871s for pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:27.366139   80404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:27.530900   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.373391416s)
	I0612 21:43:27.530973   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.530987   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.531382   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.531399   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.531406   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.531419   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.531428   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.533199   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.533212   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.533226   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.533238   80404 addons.go:475] Verifying addon metrics-server=true in "embed-certs-591460"
	I0612 21:43:27.534895   80404 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:43:27.536129   80404 addons.go:510] duration metric: took 2.017228253s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0612 21:43:28.373835   80404 pod_ready.go:92] pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.373862   80404 pod_ready.go:81] duration metric: took 1.007715736s for pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.373870   80404 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.379042   80404 pod_ready.go:92] pod "etcd-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.379065   80404 pod_ready.go:81] duration metric: took 5.188395ms for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.379078   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.384218   80404 pod_ready.go:92] pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.384233   80404 pod_ready.go:81] duration metric: took 5.148944ms for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.384241   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.389023   80404 pod_ready.go:92] pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.389046   80404 pod_ready.go:81] duration metric: took 4.78947ms for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.389056   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5l2wz" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.623880   80404 pod_ready.go:92] pod "kube-proxy-5l2wz" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.623902   80404 pod_ready.go:81] duration metric: took 234.83854ms for pod "kube-proxy-5l2wz" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.623910   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:29.022477   80404 pod_ready.go:92] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:29.022508   80404 pod_ready.go:81] duration metric: took 398.590821ms for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:29.022522   80404 pod_ready.go:38] duration metric: took 3.191712664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:29.022539   80404 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:43:29.022602   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:43:29.038776   80404 api_server.go:72] duration metric: took 3.51993276s to wait for apiserver process to appear ...
	I0612 21:43:29.038805   80404 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:43:29.038827   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:43:29.045455   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0612 21:43:29.047050   80404 api_server.go:141] control plane version: v1.30.1
	I0612 21:43:29.047072   80404 api_server.go:131] duration metric: took 8.260077ms to wait for apiserver health ...
	I0612 21:43:29.047080   80404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:43:29.226569   80404 system_pods.go:59] 9 kube-system pods found
	I0612 21:43:29.226603   80404 system_pods.go:61] "coredns-7db6d8ff4d-fpf5q" [1091154b-ef24-4447-b294-03f8d704f37e] Running
	I0612 21:43:29.226611   80404 system_pods.go:61] "coredns-7db6d8ff4d-hs7zn" [d8af54bf-17f9-48fe-a770-536c2313bc2a] Running
	I0612 21:43:29.226618   80404 system_pods.go:61] "etcd-embed-certs-591460" [bc7ad3a2-6cb6-4c32-94a7-20f6e3337b86] Running
	I0612 21:43:29.226624   80404 system_pods.go:61] "kube-apiserver-embed-certs-591460" [94b14cb3-5c3d-4be7-b5dc-3259d1fac58c] Running
	I0612 21:43:29.226631   80404 system_pods.go:61] "kube-controller-manager-embed-certs-591460" [c66f1ad8-df77-466e-9bbf-292e0937c7df] Running
	I0612 21:43:29.226636   80404 system_pods.go:61] "kube-proxy-5l2wz" [7130c7fb-880b-4a7b-937d-3980c89f217a] Running
	I0612 21:43:29.226642   80404 system_pods.go:61] "kube-scheduler-embed-certs-591460" [a02c9ded-942d-4107-a8f5-878a7924f1a4] Running
	I0612 21:43:29.226652   80404 system_pods.go:61] "metrics-server-569cc877fc-r7fbt" [e33a1ff8-3032-4be5-8b6a-3eedfbb92611] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:43:29.226659   80404 system_pods.go:61] "storage-provisioner" [ade8816b-866c-4ba3-9665-fc9b144a4286] Running
	I0612 21:43:29.226671   80404 system_pods.go:74] duration metric: took 179.583899ms to wait for pod list to return data ...
	I0612 21:43:29.226684   80404 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:43:29.422244   80404 default_sa.go:45] found service account: "default"
	I0612 21:43:29.422278   80404 default_sa.go:55] duration metric: took 195.585835ms for default service account to be created ...
	I0612 21:43:29.422290   80404 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:43:29.626614   80404 system_pods.go:86] 9 kube-system pods found
	I0612 21:43:29.626650   80404 system_pods.go:89] "coredns-7db6d8ff4d-fpf5q" [1091154b-ef24-4447-b294-03f8d704f37e] Running
	I0612 21:43:29.626659   80404 system_pods.go:89] "coredns-7db6d8ff4d-hs7zn" [d8af54bf-17f9-48fe-a770-536c2313bc2a] Running
	I0612 21:43:29.626667   80404 system_pods.go:89] "etcd-embed-certs-591460" [bc7ad3a2-6cb6-4c32-94a7-20f6e3337b86] Running
	I0612 21:43:29.626673   80404 system_pods.go:89] "kube-apiserver-embed-certs-591460" [94b14cb3-5c3d-4be7-b5dc-3259d1fac58c] Running
	I0612 21:43:29.626680   80404 system_pods.go:89] "kube-controller-manager-embed-certs-591460" [c66f1ad8-df77-466e-9bbf-292e0937c7df] Running
	I0612 21:43:29.626687   80404 system_pods.go:89] "kube-proxy-5l2wz" [7130c7fb-880b-4a7b-937d-3980c89f217a] Running
	I0612 21:43:29.626693   80404 system_pods.go:89] "kube-scheduler-embed-certs-591460" [a02c9ded-942d-4107-a8f5-878a7924f1a4] Running
	I0612 21:43:29.626703   80404 system_pods.go:89] "metrics-server-569cc877fc-r7fbt" [e33a1ff8-3032-4be5-8b6a-3eedfbb92611] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:43:29.626714   80404 system_pods.go:89] "storage-provisioner" [ade8816b-866c-4ba3-9665-fc9b144a4286] Running
	I0612 21:43:29.626725   80404 system_pods.go:126] duration metric: took 204.428087ms to wait for k8s-apps to be running ...
	I0612 21:43:29.626737   80404 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:43:29.626793   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:29.642423   80404 system_svc.go:56] duration metric: took 15.67694ms WaitForService to wait for kubelet
	I0612 21:43:29.642457   80404 kubeadm.go:576] duration metric: took 4.123619864s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:43:29.642481   80404 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:43:29.825804   80404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:43:29.825833   80404 node_conditions.go:123] node cpu capacity is 2
	I0612 21:43:29.825846   80404 node_conditions.go:105] duration metric: took 183.359091ms to run NodePressure ...
	I0612 21:43:29.825860   80404 start.go:240] waiting for startup goroutines ...
	I0612 21:43:29.825868   80404 start.go:245] waiting for cluster config update ...
	I0612 21:43:29.825881   80404 start.go:254] writing updated cluster config ...
	I0612 21:43:29.826229   80404 ssh_runner.go:195] Run: rm -f paused
	I0612 21:43:29.878580   80404 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:43:29.880438   80404 out.go:177] * Done! kubectl is now configured to use "embed-certs-591460" cluster and "default" namespace by default
	I0612 21:43:57.924825   80157 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.113719509s)
	I0612 21:43:57.924912   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:57.942507   80157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:43:57.953901   80157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:43:57.964374   80157 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:43:57.964396   80157 kubeadm.go:156] found existing configuration files:
	
	I0612 21:43:57.964439   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:43:57.974281   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:43:57.974366   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:43:57.985000   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:43:57.995268   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:43:57.995346   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:43:58.005482   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:43:58.015598   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:43:58.015659   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:43:58.028582   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:43:58.038706   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:43:58.038756   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:43:58.051818   80157 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:43:58.110576   80157 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 21:43:58.110645   80157 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:43:58.274454   80157 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:43:58.274625   80157 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:43:58.274751   80157 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:43:58.484837   80157 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:43:58.486643   80157 out.go:204]   - Generating certificates and keys ...
	I0612 21:43:58.486753   80157 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:43:58.486845   80157 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:43:58.486963   80157 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:43:58.487058   80157 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:43:58.487192   80157 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:43:58.487283   80157 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:43:58.487368   80157 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:43:58.487452   80157 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:43:58.487559   80157 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:43:58.487653   80157 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:43:58.487728   80157 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:43:58.487826   80157 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:43:58.644916   80157 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:43:58.789369   80157 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 21:43:58.924153   80157 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:43:59.044332   80157 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:43:59.352910   80157 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:43:59.353462   80157 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:43:59.356967   80157 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:43:59.359470   80157 out.go:204]   - Booting up control plane ...
	I0612 21:43:59.359596   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:43:59.359687   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:43:59.359792   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:43:59.378280   80157 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:43:59.379149   80157 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:43:59.379240   80157 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:43:59.521694   80157 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 21:43:59.521775   80157 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 21:44:00.036696   80157 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 514.972931ms
	I0612 21:44:00.036836   80157 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 21:44:05.539363   80157 kubeadm.go:309] [api-check] The API server is healthy after 5.502859715s
	I0612 21:44:05.552779   80157 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 21:44:05.567296   80157 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 21:44:05.603398   80157 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 21:44:05.603707   80157 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-087875 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 21:44:05.619311   80157 kubeadm.go:309] [bootstrap-token] Using token: x2knjj.1kuv2wdowwsbztfg
	I0612 21:44:05.621026   80157 out.go:204]   - Configuring RBAC rules ...
	I0612 21:44:05.621180   80157 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 21:44:05.628474   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 21:44:05.642438   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 21:44:05.647606   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 21:44:05.651982   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 21:44:05.656129   80157 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 21:44:05.947680   80157 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 21:44:06.430716   80157 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 21:44:06.950446   80157 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 21:44:06.951688   80157 kubeadm.go:309] 
	I0612 21:44:06.951771   80157 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 21:44:06.951782   80157 kubeadm.go:309] 
	I0612 21:44:06.951857   80157 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 21:44:06.951866   80157 kubeadm.go:309] 
	I0612 21:44:06.951919   80157 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 21:44:06.952007   80157 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 21:44:06.952083   80157 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 21:44:06.952094   80157 kubeadm.go:309] 
	I0612 21:44:06.952160   80157 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 21:44:06.952172   80157 kubeadm.go:309] 
	I0612 21:44:06.952222   80157 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 21:44:06.952232   80157 kubeadm.go:309] 
	I0612 21:44:06.952285   80157 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 21:44:06.952375   80157 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 21:44:06.952460   80157 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 21:44:06.952476   80157 kubeadm.go:309] 
	I0612 21:44:06.952612   80157 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 21:44:06.952711   80157 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 21:44:06.952722   80157 kubeadm.go:309] 
	I0612 21:44:06.952819   80157 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token x2knjj.1kuv2wdowwsbztfg \
	I0612 21:44:06.952933   80157 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 21:44:06.952963   80157 kubeadm.go:309] 	--control-plane 
	I0612 21:44:06.952985   80157 kubeadm.go:309] 
	I0612 21:44:06.953100   80157 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 21:44:06.953114   80157 kubeadm.go:309] 
	I0612 21:44:06.953219   80157 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token x2knjj.1kuv2wdowwsbztfg \
	I0612 21:44:06.953373   80157 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 21:44:06.953943   80157 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:44:06.953986   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:44:06.954003   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:44:06.956587   80157 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:44:06.957989   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:44:06.972666   80157 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:44:07.000720   80157 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:44:07.000822   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:07.000839   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-087875 minikube.k8s.io/updated_at=2024_06_12T21_44_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=no-preload-087875 minikube.k8s.io/primary=true
	I0612 21:44:07.201613   80157 ops.go:34] apiserver oom_adj: -16
	I0612 21:44:07.201713   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:07.702791   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:08.201886   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:08.702020   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:09.202755   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:09.702683   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:10.202007   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:10.702272   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:11.201764   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:11.702383   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:12.201880   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:12.702587   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:13.202524   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:13.702498   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:14.202157   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:14.702197   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:15.201852   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:15.702444   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:16.201919   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:16.701722   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:17.202307   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:17.701823   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:18.202602   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:18.702354   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:19.202207   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:19.308654   80157 kubeadm.go:1107] duration metric: took 12.307897648s to wait for elevateKubeSystemPrivileges
	W0612 21:44:19.308699   80157 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 21:44:19.308709   80157 kubeadm.go:393] duration metric: took 5m15.118303799s to StartCluster
	I0612 21:44:19.308738   80157 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:44:19.308825   80157 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:44:19.311295   80157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:44:19.311587   80157 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:44:19.313263   80157 out.go:177] * Verifying Kubernetes components...
	I0612 21:44:19.311693   80157 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:44:19.311780   80157 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:44:19.315137   80157 addons.go:69] Setting storage-provisioner=true in profile "no-preload-087875"
	I0612 21:44:19.315148   80157 addons.go:69] Setting default-storageclass=true in profile "no-preload-087875"
	I0612 21:44:19.315192   80157 addons.go:234] Setting addon storage-provisioner=true in "no-preload-087875"
	I0612 21:44:19.315201   80157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-087875"
	I0612 21:44:19.315202   80157 addons.go:69] Setting metrics-server=true in profile "no-preload-087875"
	I0612 21:44:19.315240   80157 addons.go:234] Setting addon metrics-server=true in "no-preload-087875"
	W0612 21:44:19.315255   80157 addons.go:243] addon metrics-server should already be in state true
	I0612 21:44:19.315296   80157 host.go:66] Checking if "no-preload-087875" exists ...
	W0612 21:44:19.315209   80157 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:44:19.315397   80157 host.go:66] Checking if "no-preload-087875" exists ...
	I0612 21:44:19.315139   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:44:19.315636   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315666   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.315653   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315698   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.315731   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315750   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.331461   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40419
	I0612 21:44:19.331495   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I0612 21:44:19.331924   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.332019   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.332446   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.332466   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.332580   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.332603   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.332866   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.332911   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.333087   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.333484   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.333508   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.334462   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0612 21:44:19.334922   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.335447   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.335474   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.335812   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.336376   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.336408   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.336657   80157 addons.go:234] Setting addon default-storageclass=true in "no-preload-087875"
	W0612 21:44:19.336675   80157 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:44:19.336701   80157 host.go:66] Checking if "no-preload-087875" exists ...
	I0612 21:44:19.337047   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.337078   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.350724   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I0612 21:44:19.351308   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.351869   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.351897   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.352272   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.352503   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.354434   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I0612 21:44:19.354532   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.356594   80157 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:44:19.354927   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.355284   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37489
	I0612 21:44:19.357181   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.358026   80157 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:44:19.357219   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.358040   80157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:44:19.358048   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.358058   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.358407   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.358560   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.358577   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.359024   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.359035   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.359069   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.359408   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.361013   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.361524   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.363337   80157 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:44:19.361921   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.362312   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.364713   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:44:19.364727   80157 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:44:19.364736   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.364744   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.365021   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.365260   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.365419   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.368572   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.368971   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.368988   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.369144   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.369316   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.369431   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.369538   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.377220   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
	I0612 21:44:19.377598   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.378595   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.378621   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.378931   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.379127   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.380646   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.380844   80157 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:44:19.380857   80157 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:44:19.380869   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.383763   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.384201   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.384216   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.384504   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.384660   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.384816   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.384956   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.516231   80157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:44:19.539205   80157 node_ready.go:35] waiting up to 6m0s for node "no-preload-087875" to be "Ready" ...
	I0612 21:44:19.546948   80157 node_ready.go:49] node "no-preload-087875" has status "Ready":"True"
	I0612 21:44:19.546972   80157 node_ready.go:38] duration metric: took 7.739123ms for node "no-preload-087875" to be "Ready" ...
	I0612 21:44:19.546985   80157 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:44:19.553454   80157 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.562831   80157 pod_ready.go:92] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.562854   80157 pod_ready.go:81] duration metric: took 9.377758ms for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.562862   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.568274   80157 pod_ready.go:92] pod "kube-apiserver-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.568296   80157 pod_ready.go:81] duration metric: took 5.425162ms for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.568306   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.572960   80157 pod_ready.go:92] pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.572991   80157 pod_ready.go:81] duration metric: took 4.669828ms for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.573002   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lnhzt" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.620522   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:44:19.620548   80157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:44:19.654325   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:44:19.681762   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:44:19.681800   80157 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:44:19.699701   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:44:19.774496   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:44:19.774526   80157 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:44:19.874891   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:44:20.590260   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590292   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.590276   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590360   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.590587   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.590634   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.590644   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.590651   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590658   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.592402   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.592462   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.592410   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.592411   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.592414   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.592551   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.592476   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.592655   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.592952   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.593069   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.593093   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.634339   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.634370   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.634813   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.634864   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.634880   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.321337   80157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.446394551s)
	I0612 21:44:21.321389   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:21.321403   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:21.321802   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:21.321827   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:21.321968   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.322012   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:21.322023   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:21.322278   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:21.322294   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.322305   80157 addons.go:475] Verifying addon metrics-server=true in "no-preload-087875"
	I0612 21:44:21.324652   80157 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:44:21.326653   80157 addons.go:510] duration metric: took 2.01495884s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0612 21:44:21.589251   80157 pod_ready.go:92] pod "kube-proxy-lnhzt" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:21.589290   80157 pod_ready.go:81] duration metric: took 2.016278458s for pod "kube-proxy-lnhzt" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.589305   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.652083   80157 pod_ready.go:92] pod "kube-scheduler-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:21.652122   80157 pod_ready.go:81] duration metric: took 62.805318ms for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.652136   80157 pod_ready.go:38] duration metric: took 2.105136343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:44:21.652156   80157 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:44:21.652237   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:44:21.683110   80157 api_server.go:72] duration metric: took 2.371482611s to wait for apiserver process to appear ...
	I0612 21:44:21.683148   80157 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:44:21.683187   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:44:21.704637   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 200:
	ok
	I0612 21:44:21.714032   80157 api_server.go:141] control plane version: v1.30.1
	I0612 21:44:21.714061   80157 api_server.go:131] duration metric: took 30.904631ms to wait for apiserver health ...
	I0612 21:44:21.714070   80157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:44:21.751484   80157 system_pods.go:59] 9 kube-system pods found
	I0612 21:44:21.751520   80157 system_pods.go:61] "coredns-7db6d8ff4d-hsvvf" [2b6c768b-75e2-4c11-99db-1103367ccc20] Running
	I0612 21:44:21.751526   80157 system_pods.go:61] "coredns-7db6d8ff4d-v75tt" [8b48ba7d-8f66-4c31-ac14-3a38e18fa249] Running
	I0612 21:44:21.751532   80157 system_pods.go:61] "etcd-no-preload-087875" [36cea519-d5ea-41f0-893f-358fe8af4448] Running
	I0612 21:44:21.751537   80157 system_pods.go:61] "kube-apiserver-no-preload-087875" [a09319fb-adef-467d-8482-5adf57328c2b] Running
	I0612 21:44:21.751544   80157 system_pods.go:61] "kube-controller-manager-no-preload-087875" [466fead1-a45a-4b33-8587-dc894fa20073] Running
	I0612 21:44:21.751548   80157 system_pods.go:61] "kube-proxy-lnhzt" [bdf1156c-ba02-4551-aefa-66379b05e066] Running
	I0612 21:44:21.751552   80157 system_pods.go:61] "kube-scheduler-no-preload-087875" [fc8eccee-2e27-4ea0-9e6c-0d5c127cdd4f] Running
	I0612 21:44:21.751560   80157 system_pods.go:61] "metrics-server-569cc877fc-mdmgw" [17725ee6-1d17-4a1b-9c65-f596b9b7725f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:44:21.751568   80157 system_pods.go:61] "storage-provisioner" [90368fec-12d9-4baf-aef6-233691b5e99d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:44:21.751581   80157 system_pods.go:74] duration metric: took 37.503399ms to wait for pod list to return data ...
	I0612 21:44:21.751595   80157 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:44:21.943440   80157 default_sa.go:45] found service account: "default"
	I0612 21:44:21.943465   80157 default_sa.go:55] duration metric: took 191.863221ms for default service account to be created ...
	I0612 21:44:21.943473   80157 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:44:22.146922   80157 system_pods.go:86] 9 kube-system pods found
	I0612 21:44:22.146960   80157 system_pods.go:89] "coredns-7db6d8ff4d-hsvvf" [2b6c768b-75e2-4c11-99db-1103367ccc20] Running
	I0612 21:44:22.146969   80157 system_pods.go:89] "coredns-7db6d8ff4d-v75tt" [8b48ba7d-8f66-4c31-ac14-3a38e18fa249] Running
	I0612 21:44:22.146975   80157 system_pods.go:89] "etcd-no-preload-087875" [36cea519-d5ea-41f0-893f-358fe8af4448] Running
	I0612 21:44:22.146982   80157 system_pods.go:89] "kube-apiserver-no-preload-087875" [a09319fb-adef-467d-8482-5adf57328c2b] Running
	I0612 21:44:22.146988   80157 system_pods.go:89] "kube-controller-manager-no-preload-087875" [466fead1-a45a-4b33-8587-dc894fa20073] Running
	I0612 21:44:22.146994   80157 system_pods.go:89] "kube-proxy-lnhzt" [bdf1156c-ba02-4551-aefa-66379b05e066] Running
	I0612 21:44:22.147000   80157 system_pods.go:89] "kube-scheduler-no-preload-087875" [fc8eccee-2e27-4ea0-9e6c-0d5c127cdd4f] Running
	I0612 21:44:22.147012   80157 system_pods.go:89] "metrics-server-569cc877fc-mdmgw" [17725ee6-1d17-4a1b-9c65-f596b9b7725f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:44:22.147030   80157 system_pods.go:89] "storage-provisioner" [90368fec-12d9-4baf-aef6-233691b5e99d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:44:22.147042   80157 system_pods.go:126] duration metric: took 203.562938ms to wait for k8s-apps to be running ...
	I0612 21:44:22.147056   80157 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:44:22.147110   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:44:22.167568   80157 system_svc.go:56] duration metric: took 20.500218ms WaitForService to wait for kubelet
	I0612 21:44:22.167606   80157 kubeadm.go:576] duration metric: took 2.855984791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:44:22.167627   80157 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:44:22.343015   80157 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:44:22.343039   80157 node_conditions.go:123] node cpu capacity is 2
	I0612 21:44:22.343051   80157 node_conditions.go:105] duration metric: took 175.419211ms to run NodePressure ...
	I0612 21:44:22.343064   80157 start.go:240] waiting for startup goroutines ...
	I0612 21:44:22.343073   80157 start.go:245] waiting for cluster config update ...
	I0612 21:44:22.343085   80157 start.go:254] writing updated cluster config ...
	I0612 21:44:22.343387   80157 ssh_runner.go:195] Run: rm -f paused
	I0612 21:44:22.391092   80157 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:44:22.393268   80157 out.go:177] * Done! kubectl is now configured to use "no-preload-087875" cluster and "default" namespace by default
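	A sanity check one could run at this point (illustrative only, not part of the recorded output; the context name "no-preload-087875" is taken from the log line above):
	
	  # list kube-system pods through the freshly configured kubectl context
	  kubectl --context no-preload-087875 -n kube-system get pods -o wide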
	I0612 21:44:37.700712   80762 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:44:37.700862   80762 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:44:37.702455   80762 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:44:37.702552   80762 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:44:37.702639   80762 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:44:37.702749   80762 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:44:37.702887   80762 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:44:37.702992   80762 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:44:37.704955   80762 out.go:204]   - Generating certificates and keys ...
	I0612 21:44:37.705032   80762 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:44:37.705088   80762 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:44:37.705159   80762 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:44:37.705228   80762 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:44:37.705289   80762 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:44:37.705368   80762 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:44:37.705467   80762 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:44:37.705538   80762 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:44:37.705620   80762 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:44:37.705683   80762 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:44:37.705723   80762 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:44:37.705773   80762 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:44:37.705816   80762 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:44:37.705861   80762 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:44:37.705917   80762 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:44:37.705964   80762 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:44:37.706062   80762 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:44:37.706172   80762 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:44:37.706231   80762 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:44:37.706288   80762 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:44:37.707753   80762 out.go:204]   - Booting up control plane ...
	I0612 21:44:37.707857   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:44:37.707931   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:44:37.707994   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:44:37.708064   80762 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:44:37.708197   80762 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:44:37.708251   80762 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:44:37.708344   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.708536   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.708600   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.708770   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.708864   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709067   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709133   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709340   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709441   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709638   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709650   80762 kubeadm.go:309] 
	I0612 21:44:37.709683   80762 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:44:37.709721   80762 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:44:37.709728   80762 kubeadm.go:309] 
	I0612 21:44:37.709777   80762 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:44:37.709817   80762 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:44:37.709910   80762 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:44:37.709917   80762 kubeadm.go:309] 
	I0612 21:44:37.710018   80762 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:44:37.710052   80762 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:44:37.710083   80762 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:44:37.710089   80762 kubeadm.go:309] 
	I0612 21:44:37.710184   80762 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:44:37.710259   80762 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:44:37.710265   80762 kubeadm.go:309] 
	I0612 21:44:37.710359   80762 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:44:37.710431   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:44:37.710497   80762 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:44:37.710563   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:44:37.710607   80762 kubeadm.go:309] 
	W0612 21:44:37.710666   80762 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0612 21:44:37.710709   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:44:38.170461   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:44:38.186842   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:44:38.198380   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:44:38.198400   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:44:38.198454   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:44:38.208876   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:44:38.208948   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:44:38.219641   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:44:38.229622   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:44:38.229685   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:44:38.240153   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:44:38.251342   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:44:38.251401   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:44:38.262662   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:44:38.272898   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:44:38.272954   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:44:38.283213   80762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:44:38.501637   80762 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:46:34.582636   80762 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:46:34.582745   80762 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:46:34.584702   80762 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:46:34.584775   80762 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:46:34.584898   80762 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:46:34.585029   80762 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:46:34.585172   80762 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:46:34.585263   80762 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:46:34.587030   80762 out.go:204]   - Generating certificates and keys ...
	I0612 21:46:34.587101   80762 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:46:34.587160   80762 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:46:34.587260   80762 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:46:34.587349   80762 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:46:34.587446   80762 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:46:34.587521   80762 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:46:34.587609   80762 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:46:34.587697   80762 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:46:34.587803   80762 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:46:34.587886   80762 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:46:34.588014   80762 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:46:34.588097   80762 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:46:34.588177   80762 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:46:34.588268   80762 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:46:34.588381   80762 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:46:34.588447   80762 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:46:34.588558   80762 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:46:34.588659   80762 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:46:34.588719   80762 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:46:34.588816   80762 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:46:34.590114   80762 out.go:204]   - Booting up control plane ...
	I0612 21:46:34.590226   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:46:34.590326   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:46:34.590444   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:46:34.590527   80762 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:46:34.590710   80762 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:46:34.590778   80762 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:46:34.590847   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591054   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591149   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591411   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591508   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591743   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591846   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.592108   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.592205   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.592395   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.592403   80762 kubeadm.go:309] 
	I0612 21:46:34.592436   80762 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:46:34.592485   80762 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:46:34.592500   80762 kubeadm.go:309] 
	I0612 21:46:34.592535   80762 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:46:34.592563   80762 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:46:34.592677   80762 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:46:34.592688   80762 kubeadm.go:309] 
	I0612 21:46:34.592820   80762 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:46:34.592855   80762 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:46:34.592883   80762 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:46:34.592890   80762 kubeadm.go:309] 
	I0612 21:46:34.593007   80762 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:46:34.593107   80762 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:46:34.593116   80762 kubeadm.go:309] 
	I0612 21:46:34.593224   80762 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:46:34.593342   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:46:34.593426   80762 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:46:34.593494   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:46:34.593552   80762 kubeadm.go:393] duration metric: took 8m2.356271864s to StartCluster
	I0612 21:46:34.593558   80762 kubeadm.go:309] 
	I0612 21:46:34.593589   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:46:34.593639   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:46:34.643842   80762 cri.go:89] found id: ""
	I0612 21:46:34.643876   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.643887   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:46:34.643905   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:46:34.643982   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:46:34.682878   80762 cri.go:89] found id: ""
	I0612 21:46:34.682899   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.682906   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:46:34.682912   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:46:34.682961   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:46:34.721931   80762 cri.go:89] found id: ""
	I0612 21:46:34.721955   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.721964   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:46:34.721969   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:46:34.722021   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:46:34.759233   80762 cri.go:89] found id: ""
	I0612 21:46:34.759266   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.759274   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:46:34.759280   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:46:34.759333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:46:34.800142   80762 cri.go:89] found id: ""
	I0612 21:46:34.800176   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.800186   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:46:34.800194   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:46:34.800256   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:46:34.836746   80762 cri.go:89] found id: ""
	I0612 21:46:34.836774   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.836784   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:46:34.836791   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:46:34.836850   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:46:34.876108   80762 cri.go:89] found id: ""
	I0612 21:46:34.876138   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.876147   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:46:34.876153   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:46:34.876202   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:46:34.912272   80762 cri.go:89] found id: ""
	I0612 21:46:34.912294   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.912301   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:46:34.912310   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:46:34.912324   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:46:34.997300   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:46:34.997331   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:46:34.997347   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:46:35.105602   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:46:35.105638   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:46:35.152818   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:46:35.152857   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:46:35.216504   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:46:35.216545   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0612 21:46:35.239531   80762 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0612 21:46:35.239581   80762 out.go:239] * 
	W0612 21:46:35.239646   80762 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:46:35.239672   80762 out.go:239] * 
	W0612 21:46:35.240600   80762 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0612 21:46:35.244822   80762 out.go:177] 
	W0612 21:46:35.246072   80762 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:46:35.246137   80762 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0612 21:46:35.246164   80762 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0612 21:46:35.247768   80762 out.go:177] 
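	A minimal sketch of the retry suggested above (illustrative only, not part of the recorded output; the profile name and Kubernetes version are taken from the log, while the kvm2 driver and cri-o runtime flags are assumed from this job's configuration):
	
	  # re-run the failed start with the suggested kubelet cgroup driver override
	  minikube start -p old-k8s-version-983302 \
	    --driver=kvm2 \
	    --container-runtime=crio \
	    --kubernetes-version=v1.20.0 \
	    --extra-config=kubelet.cgroup-driver=systemd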
	
	
	==> CRI-O <==
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.075971511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718228797075941228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebaef637-fb7d-4c8f-93df-6e894402ee09 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.076483488Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97f60444-d979-453e-8d90-93cdf8c9cfea name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.076593524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97f60444-d979-453e-8d90-93cdf8c9cfea name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.076626409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=97f60444-d979-453e-8d90-93cdf8c9cfea name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.109943549Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=defa28a8-f2d5-4686-b266-1e0493d6f2f9 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.110021319Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=defa28a8-f2d5-4686-b266-1e0493d6f2f9 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.112023530Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=243a2705-711f-4719-a676-4e450ebacb94 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.112385089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718228797112359301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=243a2705-711f-4719-a676-4e450ebacb94 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.112983651Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a4e0713-7dee-49f8-b410-0648d02ffedf name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.113030986Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a4e0713-7dee-49f8-b410-0648d02ffedf name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.113060458Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7a4e0713-7dee-49f8-b410-0648d02ffedf name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.147555645Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f78213a7-d8ee-441c-af3a-8ae122eb34d7 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.147640006Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f78213a7-d8ee-441c-af3a-8ae122eb34d7 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.148670052Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ddc06bf4-2bb5-4709-b5af-5e332edc8e65 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.149059026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718228797149034501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddc06bf4-2bb5-4709-b5af-5e332edc8e65 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.149616897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdb7a6a1-daa6-4501-88fe-02a561a47a49 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.149670084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdb7a6a1-daa6-4501-88fe-02a561a47a49 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.149707141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fdb7a6a1-daa6-4501-88fe-02a561a47a49 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.182986985Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2690c20-f0d6-47f5-ad43-4bec067eb91a name=/runtime.v1.RuntimeService/Version
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.183054714Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2690c20-f0d6-47f5-ad43-4bec067eb91a name=/runtime.v1.RuntimeService/Version
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.185662166Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa4ddeec-b31f-4e81-b401-cb3d96d8142a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.186138293Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718228797186116421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa4ddeec-b31f-4e81-b401-cb3d96d8142a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.186981320Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b0efb80-e389-4faf-ba74-4931636fa234 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.187047972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b0efb80-e389-4faf-ba74-4931636fa234 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:46:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:46:37.187082611Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8b0efb80-e389-4faf-ba74-4931636fa234 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun12 21:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056321] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044953] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.826136] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.486922] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.757887] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.131253] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.069367] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066150] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.207548] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.141383] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.298797] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +6.786115] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[  +0.069711] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.050220] systemd-fstab-generator[967]: Ignoring "noauto" option for root device
	[ +13.489395] kauditd_printk_skb: 46 callbacks suppressed
	[Jun12 21:42] systemd-fstab-generator[5031]: Ignoring "noauto" option for root device
	[Jun12 21:44] systemd-fstab-generator[5305]: Ignoring "noauto" option for root device
	[  +0.065559] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:46:37 up 8 min,  0 users,  load average: 0.01, 0.08, 0.06
	Linux old-k8s-version-983302 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 12 21:46:34 old-k8s-version-983302 kubelet[5484]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b48480, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bf9410, 0x24, 0x60, 0x7fb3dc070490, 0x118, ...)
	Jun 12 21:46:34 old-k8s-version-983302 kubelet[5484]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jun 12 21:46:34 old-k8s-version-983302 kubelet[5484]: net/http.(*Transport).dial(0xc00063b7c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bf9410, 0x24, 0x0, 0x0, 0x0, ...)
	Jun 12 21:46:34 old-k8s-version-983302 kubelet[5484]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jun 12 21:46:34 old-k8s-version-983302 kubelet[5484]: net/http.(*Transport).dialConn(0xc00063b7c0, 0x4f7fe00, 0xc000120018, 0x0, 0xc00033e600, 0x5, 0xc000bf9410, 0x24, 0x0, 0xc0008fb9e0, ...)
	Jun 12 21:46:34 old-k8s-version-983302 kubelet[5484]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jun 12 21:46:34 old-k8s-version-983302 kubelet[5484]: net/http.(*Transport).dialConnFor(0xc00063b7c0, 0xc000bb0c60)
	Jun 12 21:46:34 old-k8s-version-983302 kubelet[5484]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jun 12 21:46:34 old-k8s-version-983302 kubelet[5484]: created by net/http.(*Transport).queueForDial
	Jun 12 21:46:34 old-k8s-version-983302 kubelet[5484]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jun 12 21:46:34 old-k8s-version-983302 kubelet[5484]: goroutine 166 [select]:
	Jun 12 21:46:34 old-k8s-version-983302 kubelet[5484]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000c39260, 0xc000c68100, 0xc000c52960, 0xc000c52900)
	Jun 12 21:46:34 old-k8s-version-983302 kubelet[5484]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Jun 12 21:46:34 old-k8s-version-983302 kubelet[5484]: created by net.(*netFD).connect
	Jun 12 21:46:34 old-k8s-version-983302 kubelet[5484]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Jun 12 21:46:34 old-k8s-version-983302 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 12 21:46:34 old-k8s-version-983302 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 12 21:46:35 old-k8s-version-983302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jun 12 21:46:35 old-k8s-version-983302 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 12 21:46:35 old-k8s-version-983302 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 12 21:46:35 old-k8s-version-983302 kubelet[5544]: I0612 21:46:35.223896    5544 server.go:416] Version: v1.20.0
	Jun 12 21:46:35 old-k8s-version-983302 kubelet[5544]: I0612 21:46:35.224622    5544 server.go:837] Client rotation is on, will bootstrap in background
	Jun 12 21:46:35 old-k8s-version-983302 kubelet[5544]: I0612 21:46:35.226959    5544 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 12 21:46:35 old-k8s-version-983302 kubelet[5544]: I0612 21:46:35.228774    5544 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jun 12 21:46:35 old-k8s-version-983302 kubelet[5544]: W0612 21:46:35.229100    5544 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-983302 -n old-k8s-version-983302
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-983302 -n old-k8s-version-983302: exit status 2 (235.846825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-983302" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (765.97s)
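The kubelet excerpt above shows the service restart-looping (systemd restart counter at 20) while the apiserver on localhost:8443 keeps refusing connections, which is why the post-mortem skips the kubectl steps. A minimal sketch, assuming the same test binary and the profile name taken from these logs, of re-running the same checks by hand:

	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-983302 -n old-k8s-version-983302
	out/minikube-linux-amd64 -p old-k8s-version-983302 logs -n 25
	out/minikube-linux-amd64 ssh -p old-k8s-version-983302 "sudo journalctl -u kubelet --no-pager | tail -n 50"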

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0612 21:42:29.497783   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
E0612 21:43:14.295010   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:43:14.422488   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-376087 -n default-k8s-diff-port-376087
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-06-12 21:51:26.095960508 +0000 UTC m=+6037.710410886
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
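The wait above is on pods carrying the label k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. A minimal sketch, assuming the profile's kubeconfig context follows the usual profile-name convention, of inspecting that selector by hand:

	kubectl --context default-k8s-diff-port-376087 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context default-k8s-diff-port-376087 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard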
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-376087 -n default-k8s-diff-port-376087
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-376087 logs -n 25
E0612 21:51:26.516703   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-376087 logs -n 25: (2.087430227s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| delete  | -p bridge-701638                                       | bridge-701638                | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-576552 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | disable-driver-mounts-576552                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-087875             | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-376087  | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-591460            | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-983302        | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-087875                  | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-376087       | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:42 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-591460                 | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-983302             | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 21:33:52
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 21:33:52.855557   80762 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:33:52.855829   80762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:33:52.855839   80762 out.go:304] Setting ErrFile to fd 2...
	I0612 21:33:52.855845   80762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:33:52.856037   80762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:33:52.856582   80762 out.go:298] Setting JSON to false
	I0612 21:33:52.857472   80762 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8178,"bootTime":1718219855,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:33:52.857527   80762 start.go:139] virtualization: kvm guest
	I0612 21:33:52.859369   80762 out.go:177] * [old-k8s-version-983302] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:33:52.860886   80762 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:33:52.860907   80762 notify.go:220] Checking for updates...
	I0612 21:33:52.862185   80762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:33:52.863642   80762 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:33:52.865031   80762 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:33:52.866306   80762 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:33:52.867535   80762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:33:52.869148   80762 config.go:182] Loaded profile config "old-k8s-version-983302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:33:52.869530   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:33:52.869597   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:33:52.884278   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41163
	I0612 21:33:52.884743   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:33:52.885211   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:33:52.885234   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:33:52.885575   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:33:52.885768   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:33:52.887577   80762 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0612 21:33:52.888972   80762 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:33:52.889265   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:33:52.889296   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:33:52.903649   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44493
	I0612 21:33:52.904087   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:33:52.904500   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:33:52.904518   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:33:52.904831   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:33:52.904988   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:33:52.939030   80762 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 21:33:52.940484   80762 start.go:297] selected driver: kvm2
	I0612 21:33:52.940497   80762 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:33:52.940622   80762 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:33:52.941314   80762 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:33:52.941389   80762 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:33:52.956273   80762 install.go:137] /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:33:52.956646   80762 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:33:52.956674   80762 cni.go:84] Creating CNI manager for ""
	I0612 21:33:52.956682   80762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:33:52.956715   80762 start.go:340] cluster config:
	{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:33:52.956828   80762 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:33:52.958634   80762 out.go:177] * Starting "old-k8s-version-983302" primary control-plane node in "old-k8s-version-983302" cluster
	I0612 21:33:52.959924   80762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:33:52.959963   80762 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0612 21:33:52.959970   80762 cache.go:56] Caching tarball of preloaded images
	I0612 21:33:52.960065   80762 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:33:52.960079   80762 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0612 21:33:52.960190   80762 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:33:52.960397   80762 start.go:360] acquireMachinesLock for old-k8s-version-983302: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:33:57.423439   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:00.495475   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:06.575478   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:09.647560   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:15.727510   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:18.799491   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:24.879423   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:27.951495   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:34.031457   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:37.103569   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:43.183470   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:46.255491   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:52.335452   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:55.407544   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:01.487489   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:04.559546   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:10.639492   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:13.711372   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:19.791460   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:22.863455   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:28.943506   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:32.015443   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:38.095436   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:41.167526   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:47.247485   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:50.319435   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:56.399471   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:59.471485   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:05.551493   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:08.623467   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:14.703401   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:17.775479   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:23.855516   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:26.927418   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:33.007439   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:36.079449   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:42.159480   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:45.231482   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:51.311424   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:54.383524   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:00.463466   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:03.535465   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:09.615457   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:12.687462   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:18.767463   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:21.839431   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:24.843967   80243 start.go:364] duration metric: took 4m34.377488728s to acquireMachinesLock for "default-k8s-diff-port-376087"
	I0612 21:37:24.844034   80243 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:37:24.844046   80243 fix.go:54] fixHost starting: 
	I0612 21:37:24.844649   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:37:24.844689   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:37:24.859743   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I0612 21:37:24.860227   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:37:24.860659   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:37:24.860680   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:37:24.861055   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:37:24.861352   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:24.861550   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:37:24.863507   80243 fix.go:112] recreateIfNeeded on default-k8s-diff-port-376087: state=Stopped err=<nil>
	I0612 21:37:24.863538   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	W0612 21:37:24.863708   80243 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:37:24.865564   80243 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-376087" ...
	I0612 21:37:24.866899   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Start
	I0612 21:37:24.867064   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring networks are active...
	I0612 21:37:24.867951   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring network default is active
	I0612 21:37:24.868390   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring network mk-default-k8s-diff-port-376087 is active
	I0612 21:37:24.868746   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Getting domain xml...
	I0612 21:37:24.869408   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Creating domain...
	I0612 21:37:24.841481   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:37:24.841529   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:37:24.841912   80157 buildroot.go:166] provisioning hostname "no-preload-087875"
	I0612 21:37:24.841938   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:37:24.842149   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:37:24.843818   80157 machine.go:97] duration metric: took 4m37.413209096s to provisionDockerMachine
	I0612 21:37:24.843853   80157 fix.go:56] duration metric: took 4m37.434262933s for fixHost
	I0612 21:37:24.843860   80157 start.go:83] releasing machines lock for "no-preload-087875", held for 4m37.434303466s
	W0612 21:37:24.843897   80157 start.go:713] error starting host: provision: host is not running
	W0612 21:37:24.843971   80157 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0612 21:37:24.843980   80157 start.go:728] Will try again in 5 seconds ...
	I0612 21:37:26.077364   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting to get IP...
	I0612 21:37:26.078173   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.078646   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.078686   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.078611   81491 retry.go:31] will retry after 224.429366ms: waiting for machine to come up
	I0612 21:37:26.305227   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.305668   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.305699   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.305627   81491 retry.go:31] will retry after 298.325251ms: waiting for machine to come up
	I0612 21:37:26.605155   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.605587   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.605622   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.605558   81491 retry.go:31] will retry after 327.789765ms: waiting for machine to come up
	I0612 21:37:26.935066   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.935536   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.935567   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.935477   81491 retry.go:31] will retry after 381.56012ms: waiting for machine to come up
	I0612 21:37:27.319036   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.319485   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.319516   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:27.319429   81491 retry.go:31] will retry after 474.663822ms: waiting for machine to come up
	I0612 21:37:27.796149   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.796596   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.796635   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:27.796564   81491 retry.go:31] will retry after 943.868595ms: waiting for machine to come up
	I0612 21:37:28.741715   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:28.742226   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:28.742259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:28.742180   81491 retry.go:31] will retry after 1.014472282s: waiting for machine to come up
	I0612 21:37:29.758384   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:29.758928   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:29.758947   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:29.758867   81491 retry.go:31] will retry after 971.872729ms: waiting for machine to come up
	I0612 21:37:29.845647   80157 start.go:360] acquireMachinesLock for no-preload-087875: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:37:30.732362   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:30.732794   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:30.732827   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:30.732742   81491 retry.go:31] will retry after 1.352202491s: waiting for machine to come up
	I0612 21:37:32.087272   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:32.087702   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:32.087726   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:32.087663   81491 retry.go:31] will retry after 2.276552983s: waiting for machine to come up
	I0612 21:37:34.367159   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:34.367579   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:34.367613   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:34.367520   81491 retry.go:31] will retry after 1.785262755s: waiting for machine to come up
	I0612 21:37:36.154927   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:36.155388   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:36.155412   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:36.155357   81491 retry.go:31] will retry after 3.309693081s: waiting for machine to come up
	I0612 21:37:39.468800   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:39.469443   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:39.469469   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:39.469393   81491 retry.go:31] will retry after 4.284995408s: waiting for machine to come up
	I0612 21:37:45.096430   80404 start.go:364] duration metric: took 4m40.295909999s to acquireMachinesLock for "embed-certs-591460"
	I0612 21:37:45.096485   80404 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:37:45.096490   80404 fix.go:54] fixHost starting: 
	I0612 21:37:45.096932   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:37:45.096972   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:37:45.113819   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39005
	I0612 21:37:45.114290   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:37:45.114823   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:37:45.114843   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:37:45.115208   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:37:45.115415   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:37:45.115578   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:37:45.117131   80404 fix.go:112] recreateIfNeeded on embed-certs-591460: state=Stopped err=<nil>
	I0612 21:37:45.117156   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	W0612 21:37:45.117324   80404 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:37:45.119535   80404 out.go:177] * Restarting existing kvm2 VM for "embed-certs-591460" ...
	I0612 21:37:43.759195   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.759548   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Found IP for machine: 192.168.61.80
	I0612 21:37:43.759575   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has current primary IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.759583   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Reserving static IP address...
	I0612 21:37:43.760031   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Reserved static IP address: 192.168.61.80
	I0612 21:37:43.760063   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-376087", mac: "52:54:00:01:75:58", ip: "192.168.61.80"} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.760075   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for SSH to be available...
	I0612 21:37:43.760120   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | skip adding static IP to network mk-default-k8s-diff-port-376087 - found existing host DHCP lease matching {name: "default-k8s-diff-port-376087", mac: "52:54:00:01:75:58", ip: "192.168.61.80"}
	I0612 21:37:43.760134   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Getting to WaitForSSH function...
	I0612 21:37:43.762259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.762597   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.762626   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.762741   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Using SSH client type: external
	I0612 21:37:43.762771   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa (-rw-------)
	I0612 21:37:43.762804   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:37:43.762842   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | About to run SSH command:
	I0612 21:37:43.762860   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | exit 0
	I0612 21:37:43.891446   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | SSH cmd err, output: <nil>: 
	I0612 21:37:43.891831   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetConfigRaw
	I0612 21:37:43.892485   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:43.895220   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.895625   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.895656   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.895928   80243 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/config.json ...
	I0612 21:37:43.896140   80243 machine.go:94] provisionDockerMachine start ...
	I0612 21:37:43.896161   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:43.896388   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:43.898898   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.899317   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.899346   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.899539   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:43.899727   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:43.899868   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:43.900019   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:43.900171   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:43.900360   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:43.900371   80243 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:37:44.016295   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:37:44.016327   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.016577   80243 buildroot.go:166] provisioning hostname "default-k8s-diff-port-376087"
	I0612 21:37:44.016602   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.016804   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.019396   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.019732   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.019763   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.019881   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.020084   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.020214   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.020418   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.020612   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.020803   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.020820   80243 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-376087 && echo "default-k8s-diff-port-376087" | sudo tee /etc/hostname
	I0612 21:37:44.146019   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376087
	
	I0612 21:37:44.146049   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.148758   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.149204   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.149238   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.149356   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.149538   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.149731   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.149873   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.150013   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.150187   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.150204   80243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-376087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-376087/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-376087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:37:44.272821   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:37:44.272852   80243 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:37:44.272887   80243 buildroot.go:174] setting up certificates
	I0612 21:37:44.272895   80243 provision.go:84] configureAuth start
	I0612 21:37:44.272903   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.273185   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:44.275991   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.276337   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.276366   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.276591   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.279011   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.279370   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.279396   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.279521   80243 provision.go:143] copyHostCerts
	I0612 21:37:44.279576   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:37:44.279585   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:37:44.279649   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:37:44.279740   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:37:44.279748   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:37:44.279770   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:37:44.279828   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:37:44.279835   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:37:44.279855   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:37:44.279914   80243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-376087 san=[127.0.0.1 192.168.61.80 default-k8s-diff-port-376087 localhost minikube]
	I0612 21:37:44.410909   80243 provision.go:177] copyRemoteCerts
	I0612 21:37:44.410974   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:37:44.410999   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.413740   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.414140   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.414173   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.414406   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.414597   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.414759   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.414904   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:44.501641   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:37:44.526082   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0612 21:37:44.549455   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:37:44.572447   80243 provision.go:87] duration metric: took 299.539656ms to configureAuth
	I0612 21:37:44.572473   80243 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:37:44.572632   80243 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:37:44.572731   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.575518   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.575913   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.575948   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.576170   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.576383   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.576553   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.576754   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.576913   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.577134   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.577155   80243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:37:44.851891   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:37:44.851922   80243 machine.go:97] duration metric: took 955.766062ms to provisionDockerMachine
	I0612 21:37:44.851936   80243 start.go:293] postStartSetup for "default-k8s-diff-port-376087" (driver="kvm2")
	I0612 21:37:44.851951   80243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:37:44.851970   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:44.852318   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:37:44.852352   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.855231   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.855556   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.855595   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.855727   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.855935   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.856127   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.856260   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:44.941821   80243 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:37:44.946013   80243 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:37:44.946052   80243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:37:44.946120   80243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:37:44.946200   80243 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:37:44.946281   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:37:44.955467   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:37:44.979379   80243 start.go:296] duration metric: took 127.428385ms for postStartSetup
	I0612 21:37:44.979421   80243 fix.go:56] duration metric: took 20.135375416s for fixHost
	I0612 21:37:44.979445   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.981891   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.982259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.982287   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.982520   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.982713   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.982920   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.983040   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.983220   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.983450   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.983467   80243 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:37:45.096266   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228265.072559389
	
	I0612 21:37:45.096288   80243 fix.go:216] guest clock: 1718228265.072559389
	I0612 21:37:45.096295   80243 fix.go:229] Guest: 2024-06-12 21:37:45.072559389 +0000 UTC Remote: 2024-06-12 21:37:44.979426071 +0000 UTC m=+294.653210040 (delta=93.133318ms)
	I0612 21:37:45.096313   80243 fix.go:200] guest clock delta is within tolerance: 93.133318ms
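(The fix.go lines above read the guest clock with `date +%s.%N`, compare it to the host time, and only warn if the delta exceeds a tolerance. A small sketch of that delta check follows; the tolerance value and helper names are assumptions for illustration, not minikube's actual constants.)

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` (e.g. "1718228265.072559389")
// into a time.Time. It assumes the fractional part has the full 9 digits that %N emits.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1718228265.072559389") // value taken from the log above
	host := time.Now()
	delta := host.Sub(guest)
	const tolerance = 2 * time.Second // assumed threshold for the sketch
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v, consider syncing time\n", delta)
	}
}
```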
	I0612 21:37:45.096318   80243 start.go:83] releasing machines lock for "default-k8s-diff-port-376087", held for 20.252307995s
	I0612 21:37:45.096346   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.096683   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:45.099332   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.099761   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.099805   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.099902   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100560   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100767   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100841   80243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:37:45.100880   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:45.100981   80243 ssh_runner.go:195] Run: cat /version.json
	I0612 21:37:45.101007   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:45.103590   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.103774   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104052   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.104084   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104186   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.104202   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:45.104210   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104417   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:45.104430   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:45.104650   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:45.104651   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:45.104837   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:45.104852   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:45.104993   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:45.208199   80243 ssh_runner.go:195] Run: systemctl --version
	I0612 21:37:45.214375   80243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:37:45.370991   80243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:37:45.378676   80243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:37:45.378744   80243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:37:45.400622   80243 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:37:45.400642   80243 start.go:494] detecting cgroup driver to use...
	I0612 21:37:45.400709   80243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:37:45.416775   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:37:45.430261   80243 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:37:45.430314   80243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:37:45.445482   80243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:37:45.461471   80243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:37:45.578411   80243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:37:45.750493   80243 docker.go:233] disabling docker service ...
	I0612 21:37:45.750556   80243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:37:45.769072   80243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:37:45.784755   80243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:37:45.907970   80243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:37:46.031847   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:37:46.046473   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:37:46.067764   80243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:37:46.067813   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.080604   80243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:37:46.080660   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.093611   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.104443   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.117070   80243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:37:46.128759   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.139977   80243 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.157893   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.168896   80243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:37:46.179765   80243 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:37:46.179816   80243 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:37:46.194059   80243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
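(The sequence above is the usual bridge-netfilter preflight for a CNI-backed runtime: the sysctl read fails because br_netfilter is not loaded yet, so the module is loaded and IP forwarding is enabled. A rough standalone sketch of that fallback, using os/exec with the same commands the log shows; the helper name is illustrative.)

```go
package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the log above: if the bridge-nf-call-iptables
// sysctl cannot be read, load the br_netfilter module, then enable IP forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl key is missing until br_netfilter is loaded; not fatal.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("loading br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("preflight failed:", err)
	}
}
```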
	I0612 21:37:46.205474   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:37:46.322562   80243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:37:46.479073   80243 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:37:46.479149   80243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:37:46.484557   80243 start.go:562] Will wait 60s for crictl version
	I0612 21:37:46.484609   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:37:46.488403   80243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:37:46.529210   80243 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:37:46.529301   80243 ssh_runner.go:195] Run: crio --version
	I0612 21:37:46.561476   80243 ssh_runner.go:195] Run: crio --version
	I0612 21:37:46.594477   80243 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:37:45.120900   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Start
	I0612 21:37:45.121084   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring networks are active...
	I0612 21:37:45.121776   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring network default is active
	I0612 21:37:45.122108   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring network mk-embed-certs-591460 is active
	I0612 21:37:45.122554   80404 main.go:141] libmachine: (embed-certs-591460) Getting domain xml...
	I0612 21:37:45.123260   80404 main.go:141] libmachine: (embed-certs-591460) Creating domain...
	I0612 21:37:46.357867   80404 main.go:141] libmachine: (embed-certs-591460) Waiting to get IP...
	I0612 21:37:46.358704   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.359164   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.359265   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.359144   81627 retry.go:31] will retry after 278.948395ms: waiting for machine to come up
	I0612 21:37:46.639971   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.640491   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.640523   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.640433   81627 retry.go:31] will retry after 342.550517ms: waiting for machine to come up
	I0612 21:37:46.985065   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.985590   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.985618   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.985548   81627 retry.go:31] will retry after 297.683214ms: waiting for machine to come up
	I0612 21:37:47.285192   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:47.285650   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:47.285688   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:47.285615   81627 retry.go:31] will retry after 415.994572ms: waiting for machine to come up
	I0612 21:37:47.702894   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:47.703398   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:47.703424   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:47.703353   81627 retry.go:31] will retry after 672.441633ms: waiting for machine to come up
	I0612 21:37:48.377227   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:48.377772   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:48.377802   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:48.377735   81627 retry.go:31] will retry after 790.165478ms: waiting for machine to come up
	I0612 21:37:49.169651   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:49.170194   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:49.170224   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:49.170134   81627 retry.go:31] will retry after 953.609739ms: waiting for machine to come up
	I0612 21:37:46.595772   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:46.599221   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:46.599682   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:46.599712   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:46.599919   80243 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0612 21:37:46.604573   80243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:37:46.617274   80243 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-376087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:37:46.617388   80243 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:37:46.617443   80243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:37:46.663227   80243 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:37:46.663306   80243 ssh_runner.go:195] Run: which lz4
	I0612 21:37:46.667878   80243 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0612 21:37:46.672384   80243 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:37:46.672416   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 21:37:48.195844   80243 crio.go:462] duration metric: took 1.527996646s to copy over tarball
	I0612 21:37:48.195908   80243 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:37:50.125800   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:50.126305   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:50.126337   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:50.126260   81627 retry.go:31] will retry after 938.251336ms: waiting for machine to come up
	I0612 21:37:51.065851   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:51.066225   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:51.066247   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:51.066194   81627 retry.go:31] will retry after 1.635454683s: waiting for machine to come up
	I0612 21:37:52.704193   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:52.704663   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:52.704687   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:52.704633   81627 retry.go:31] will retry after 1.56455027s: waiting for machine to come up
	I0612 21:37:54.271391   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:54.271873   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:54.271919   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:54.271826   81627 retry.go:31] will retry after 2.052574222s: waiting for machine to come up
	I0612 21:37:50.464553   80243 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.268615304s)
	I0612 21:37:50.464601   80243 crio.go:469] duration metric: took 2.268715227s to extract the tarball
	I0612 21:37:50.464612   80243 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:37:50.502406   80243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:37:50.550796   80243 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:37:50.550821   80243 cache_images.go:84] Images are preloaded, skipping loading
	I0612 21:37:50.550831   80243 kubeadm.go:928] updating node { 192.168.61.80 8444 v1.30.1 crio true true} ...
	I0612 21:37:50.550957   80243 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-376087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:37:50.551042   80243 ssh_runner.go:195] Run: crio config
	I0612 21:37:50.603232   80243 cni.go:84] Creating CNI manager for ""
	I0612 21:37:50.603256   80243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:37:50.603268   80243 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:37:50.603299   80243 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.80 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-376087 NodeName:default-k8s-diff-port-376087 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:37:50.603459   80243 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.80
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-376087"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
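(The block above is the multi-document config minikube renders before writing it to the guest as kubeadm.yaml.new: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration separated by `---`. If you want to sanity-check such a file outside of minikube, a small sketch that just confirms each document parses and reports its kind is below; it assumes gopkg.in/yaml.v3 is available and a local file named kubeadm.yaml.)

```go
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Reads a multi-document YAML file (like the kubeadm/kubelet/kube-proxy config
// above) and prints the apiVersion and kind of each document it finds.
func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("found %s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
```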
	I0612 21:37:50.603524   80243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:37:50.614003   80243 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:37:50.614082   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:37:50.623416   80243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0612 21:37:50.640203   80243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:37:50.656668   80243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0612 21:37:50.674601   80243 ssh_runner.go:195] Run: grep 192.168.61.80	control-plane.minikube.internal$ /etc/hosts
	I0612 21:37:50.678858   80243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:37:50.692389   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:37:50.822225   80243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:37:50.840703   80243 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087 for IP: 192.168.61.80
	I0612 21:37:50.840734   80243 certs.go:194] generating shared ca certs ...
	I0612 21:37:50.840758   80243 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:37:50.840936   80243 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:37:50.840986   80243 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:37:50.840999   80243 certs.go:256] generating profile certs ...
	I0612 21:37:50.841133   80243 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/client.key
	I0612 21:37:50.841200   80243 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.key.0afce446
	I0612 21:37:50.841238   80243 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.key
	I0612 21:37:50.841357   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:37:50.841398   80243 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:37:50.841409   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:37:50.841438   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:37:50.841469   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:37:50.841489   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:37:50.841529   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:37:50.842311   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:37:50.880075   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:37:50.914504   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:37:50.945724   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:37:50.975702   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0612 21:37:51.009817   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:37:51.039086   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:37:51.064146   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:37:51.088483   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:37:51.112785   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:37:51.136192   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:37:51.159239   80243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:37:51.175719   80243 ssh_runner.go:195] Run: openssl version
	I0612 21:37:51.181707   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:37:51.193498   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.198415   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.198475   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.204601   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:37:51.216354   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:37:51.231979   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.236952   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.237018   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.243461   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:37:51.258481   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:37:51.273412   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.279356   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.279420   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.285551   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
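The openssl x509 -hash / ln -fs pairs above implement the usual c_rehash layout: each CA under /usr/share/ca-certificates is linked into /etc/ssl/certs as <subject-hash>.0, which is where the 51391683.0, 3ec20f2e.0 and b5213941.0 names come from. A hedged Go sketch of one such link, shelling out to openssl for the hash (the helper name is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash symlinks certPath into certsDir as "<subject-hash>.0",
// mirroring `openssl x509 -hash -noout -in <cert>` followed by `ln -fs`.
func linkCertByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link)
}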
	I0612 21:37:51.298066   80243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:37:51.302791   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:37:51.309402   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:37:51.316170   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:37:51.322785   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:37:51.329066   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:37:51.335031   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
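Each -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours, which is what would force regeneration instead of the reuse seen here. The equivalent check in pure Go with crypto/x509, as a sketch rather than minikube's actual code path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent to: openssl x509 -noout -in <cert> -checkend 86400
	fmt.Println("expires within 24h:", soon)
}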
	I0612 21:37:51.340945   80243 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-376087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:37:51.341082   80243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:37:51.341143   80243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:37:51.383011   80243 cri.go:89] found id: ""
	I0612 21:37:51.383134   80243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:37:51.394768   80243 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:37:51.394794   80243 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:37:51.394800   80243 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:37:51.394852   80243 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:37:51.408147   80243 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:37:51.409094   80243 kubeconfig.go:125] found "default-k8s-diff-port-376087" server: "https://192.168.61.80:8444"
	I0612 21:37:51.411221   80243 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:37:51.421897   80243 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.80
	I0612 21:37:51.421934   80243 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:37:51.421949   80243 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:37:51.422029   80243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:37:51.470321   80243 cri.go:89] found id: ""
	I0612 21:37:51.470441   80243 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:37:51.488369   80243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:37:51.498367   80243 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:37:51.498388   80243 kubeadm.go:156] found existing configuration files:
	
	I0612 21:37:51.498449   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0612 21:37:51.510212   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:37:51.510287   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:37:51.520231   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0612 21:37:51.529270   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:37:51.529339   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:37:51.538902   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0612 21:37:51.548593   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:37:51.548652   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:37:51.558533   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0612 21:37:51.567995   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:37:51.568063   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:37:51.577695   80243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:37:51.587794   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:51.718155   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.602448   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.820456   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.901167   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.977502   80243 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:37:52.977606   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.477802   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.977879   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.995753   80243 api_server.go:72] duration metric: took 1.018251882s to wait for apiserver process to appear ...
	I0612 21:37:53.995788   80243 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:37:53.995812   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:53.996308   80243 api_server.go:269] stopped: https://192.168.61.80:8444/healthz: Get "https://192.168.61.80:8444/healthz": dial tcp 192.168.61.80:8444: connect: connection refused
	I0612 21:37:54.496045   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.293362   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:37:57.293394   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:37:57.293408   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.395854   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:57.395886   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:57.496122   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.505090   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:57.505124   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:57.996334   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:58.000606   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:58.000646   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:58.496177   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:58.504422   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 200:
	ok
	I0612 21:37:58.513123   80243 api_server.go:141] control plane version: v1.30.1
	I0612 21:37:58.513150   80243 api_server.go:131] duration metric: took 4.517354722s to wait for apiserver health ...
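The 403 / 500 / 200 progression above is the normal restart sequence: anonymous /healthz is forbidden until the RBAC bootstrap roles land, then individual post-start hooks report "failed" until they finish, and finally the endpoint returns ok. A minimal Go polling loop in the same spirit (it skips TLS verification for brevity; the real api_server.go check trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns 200 or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.61.80:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}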
	I0612 21:37:58.513158   80243 cni.go:84] Creating CNI manager for ""
	I0612 21:37:58.513163   80243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:37:58.514696   80243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:37:56.325937   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:56.326316   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:56.326343   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:56.326261   81627 retry.go:31] will retry after 3.51636746s: waiting for machine to come up
	I0612 21:37:58.516091   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:37:58.541034   80243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:37:58.585635   80243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:37:58.596829   80243 system_pods.go:59] 8 kube-system pods found
	I0612 21:37:58.596859   80243 system_pods.go:61] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:37:58.596867   80243 system_pods.go:61] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:37:58.596873   80243 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:37:58.596883   80243 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:37:58.596888   80243 system_pods.go:61] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0612 21:37:58.596893   80243 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:37:58.596899   80243 system_pods.go:61] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:37:58.596904   80243 system_pods.go:61] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:37:58.596910   80243 system_pods.go:74] duration metric: took 11.248328ms to wait for pod list to return data ...
	I0612 21:37:58.596917   80243 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:37:58.600081   80243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:37:58.600107   80243 node_conditions.go:123] node cpu capacity is 2
	I0612 21:37:58.600119   80243 node_conditions.go:105] duration metric: took 3.197181ms to run NodePressure ...
	I0612 21:37:58.600134   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:58.911963   80243 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:37:58.918455   80243 kubeadm.go:733] kubelet initialised
	I0612 21:37:58.918475   80243 kubeadm.go:734] duration metric: took 6.490654ms waiting for restarted kubelet to initialise ...
	I0612 21:37:58.918482   80243 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:37:58.924427   80243 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.930290   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.930329   80243 pod_ready.go:81] duration metric: took 5.86525ms for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.930339   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.930346   80243 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.935394   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.935416   80243 pod_ready.go:81] duration metric: took 5.061639ms for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.935426   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.935431   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.940238   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.940268   80243 pod_ready.go:81] duration metric: took 4.829842ms for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.940286   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.940295   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.989649   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.989686   80243 pod_ready.go:81] duration metric: took 49.380431ms for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.989702   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.989711   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:59.389868   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-proxy-8lrgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.389903   80243 pod_ready.go:81] duration metric: took 400.174877ms for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:59.389912   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-proxy-8lrgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.389918   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:59.790398   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.790425   80243 pod_ready.go:81] duration metric: took 400.499157ms for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:59.790435   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.790449   80243 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:00.189506   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:00.189533   80243 pod_ready.go:81] duration metric: took 399.075983ms for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:00.189551   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:00.189559   80243 pod_ready.go:38] duration metric: took 1.271068537s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
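The pod_ready.go lines show why the extra wait finishes in about a second: the node itself still reports Ready=False, so each system-critical pod check is skipped with an error instead of blocking for the full 4m0s. A hedged client-go sketch of the underlying per-pod readiness test (kubeconfig path and pod name taken from the log; the loop and helper are illustrative, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod has condition Ready=True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the pod reports Ready; a real caller would add a deadline.
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-cllsk", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}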
	I0612 21:38:00.189574   80243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:38:00.201480   80243 ops.go:34] apiserver oom_adj: -16
	I0612 21:38:00.201504   80243 kubeadm.go:591] duration metric: took 8.806697524s to restartPrimaryControlPlane
	I0612 21:38:00.201514   80243 kubeadm.go:393] duration metric: took 8.860579681s to StartCluster
	I0612 21:38:00.201536   80243 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:00.201601   80243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:38:00.203106   80243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:00.203416   80243 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:38:00.205568   80243 out.go:177] * Verifying Kubernetes components...
	I0612 21:38:00.203448   80243 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:38:00.203614   80243 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:00.207110   80243 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207120   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:00.207120   80243 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207143   80243 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207166   80243 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.207193   80243 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:38:00.207187   80243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-376087"
	I0612 21:38:00.207208   80243 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.207222   80243 addons.go:243] addon metrics-server should already be in state true
	I0612 21:38:00.207230   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.207263   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.207490   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207511   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207519   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.207544   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.207553   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207572   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.222521   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0612 21:38:00.222979   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.223496   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.223523   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.223899   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.224519   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.224555   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.227511   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
	I0612 21:38:00.227543   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33041
	I0612 21:38:00.227874   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.227930   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.228402   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.228409   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.228426   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.228471   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.228776   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.228780   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.228952   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.229291   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.229323   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.232640   80243 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.232662   80243 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:38:00.232690   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.233072   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.233103   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.240883   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38355
	I0612 21:38:00.241363   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.241839   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.241861   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.242217   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.242434   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.244544   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.244604   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0612 21:38:00.246924   80243 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:38:00.244915   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.248406   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:38:00.248430   80243 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:38:00.248451   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.248861   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.248887   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.249211   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.249431   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.251070   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.251137   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43271
	I0612 21:38:00.252729   80243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:00.251644   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.252033   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.252601   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.254033   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.254079   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.254111   80243 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:38:00.254127   80243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:38:00.254148   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.254211   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.254399   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.254515   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.254542   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.254712   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.254926   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.256878   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.256948   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.257836   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.258073   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.258105   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.258767   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.258993   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.259141   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.259283   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.272822   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42339
	I0612 21:38:00.273238   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.273710   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.273734   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.274221   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.274411   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.276056   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.276286   80243 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:38:00.276302   80243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:38:00.276323   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.279285   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.279351   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.279400   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.279516   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.279675   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.279809   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.279940   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.392656   80243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:00.411972   80243 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-376087" to be "Ready" ...
	I0612 21:38:00.502108   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:38:00.504572   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:38:00.504590   80243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:38:00.522021   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:38:00.522057   80243 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:38:00.538366   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:38:00.541981   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:38:00.541999   80243 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:38:00.561335   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:38:01.519955   80243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.017815416s)
	I0612 21:38:01.520006   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520019   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520087   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520100   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520312   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520334   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520343   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520350   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520422   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520435   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520444   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520452   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520554   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520573   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520647   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520678   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.520680   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.528807   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.528827   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.529143   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.529162   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.529166   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.556376   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.556399   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.556701   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.556750   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.556762   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.556780   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.556791   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.557157   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.557179   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.557190   80243 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-376087"
	I0612 21:38:01.559103   80243 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
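The "Enabled addons" line above closes the addon-apply sequence for this profile: each manifest is scp'd into /etc/kubernetes/addons and applied with the bundled kubectl. A minimal sketch for spot-checking the result from the host, assuming the kubeconfig context carries the profile name default-k8s-diff-port-376087 and that the addon's Deployment uses the usual name metrics-server in kube-system (both are assumptions based on minikube defaults, not output from this run):

    # Hedged sketch: confirm the addons reported above actually came up.
    # Context and deployment names are assumptions based on minikube defaults.
    kubectl --context default-k8s-diff-port-376087 -n kube-system rollout status deployment/metrics-server --timeout=120s
    kubectl --context default-k8s-diff-port-376087 get storageclass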
	I0612 21:37:59.844024   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:59.844481   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:59.844505   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:59.844433   81627 retry.go:31] will retry after 3.77902453s: waiting for machine to come up
	I0612 21:38:03.626861   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.627380   80404 main.go:141] libmachine: (embed-certs-591460) Found IP for machine: 192.168.39.147
	I0612 21:38:03.627399   80404 main.go:141] libmachine: (embed-certs-591460) Reserving static IP address...
	I0612 21:38:03.627416   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has current primary IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.627917   80404 main.go:141] libmachine: (embed-certs-591460) Reserved static IP address: 192.168.39.147
	I0612 21:38:03.627964   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "embed-certs-591460", mac: "52:54:00:41:f7:d9", ip: "192.168.39.147"} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.627981   80404 main.go:141] libmachine: (embed-certs-591460) Waiting for SSH to be available...
	I0612 21:38:03.628011   80404 main.go:141] libmachine: (embed-certs-591460) DBG | skip adding static IP to network mk-embed-certs-591460 - found existing host DHCP lease matching {name: "embed-certs-591460", mac: "52:54:00:41:f7:d9", ip: "192.168.39.147"}
	I0612 21:38:03.628030   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Getting to WaitForSSH function...
	I0612 21:38:03.630082   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.630480   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.630581   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.630762   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Using SSH client type: external
	I0612 21:38:03.630802   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa (-rw-------)
	I0612 21:38:03.630846   80404 main.go:141] libmachine: (embed-certs-591460) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:03.630872   80404 main.go:141] libmachine: (embed-certs-591460) DBG | About to run SSH command:
	I0612 21:38:03.630882   80404 main.go:141] libmachine: (embed-certs-591460) DBG | exit 0
	I0612 21:38:03.755304   80404 main.go:141] libmachine: (embed-certs-591460) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:03.755720   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetConfigRaw
	I0612 21:38:03.756310   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:03.758608   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.758927   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.758966   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.759153   80404 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/config.json ...
	I0612 21:38:03.759390   80404 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:03.759414   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:03.759641   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.761954   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.762215   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.762244   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.762371   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.762525   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.762689   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.762842   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.762995   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.763183   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.763206   80404 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:03.867900   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:03.867936   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:03.868185   80404 buildroot.go:166] provisioning hostname "embed-certs-591460"
	I0612 21:38:03.868210   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:03.868430   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.871347   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.871690   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.871721   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.871816   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.871977   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.872130   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.872258   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.872408   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.872588   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.872604   80404 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-591460 && echo "embed-certs-591460" | sudo tee /etc/hostname
	I0612 21:38:03.990526   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-591460
	
	I0612 21:38:03.990550   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.993057   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.993458   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.993485   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.993646   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.993830   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.993985   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.994125   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.994297   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.994499   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.994524   80404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-591460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-591460/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-591460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:04.120595   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:04.120623   80404 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:04.120640   80404 buildroot.go:174] setting up certificates
	I0612 21:38:04.120650   80404 provision.go:84] configureAuth start
	I0612 21:38:04.120658   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:04.120910   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:04.123483   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.123854   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.123879   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.124153   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.126901   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.127293   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.127318   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.127494   80404 provision.go:143] copyHostCerts
	I0612 21:38:04.127554   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:04.127566   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:04.127635   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:04.127736   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:04.127747   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:04.127785   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:04.127860   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:04.127870   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:04.127896   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:04.127960   80404 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.embed-certs-591460 san=[127.0.0.1 192.168.39.147 embed-certs-591460 localhost minikube]
	I0612 21:38:04.265296   80404 provision.go:177] copyRemoteCerts
	I0612 21:38:04.265361   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:04.265392   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.267703   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.268044   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.268090   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.268244   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.268421   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.268583   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.268780   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.349440   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:04.374868   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0612 21:38:04.398419   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:38:04.423319   80404 provision.go:87] duration metric: took 302.657777ms to configureAuth
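configureAuth above generated a server certificate with the SANs listed in the log (127.0.0.1, 192.168.39.147, embed-certs-591460, localhost, minikube) and copied it to /etc/docker on the guest. A minimal sketch for verifying the pushed material from inside the VM, assuming access via `minikube ssh -p embed-certs-591460`; the file paths come from the log lines above:

    # Hedged sketch: inspect the server certificate the provisioner just copied over.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem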
	I0612 21:38:04.423353   80404 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:04.423514   80404 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:04.423586   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.426301   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.426612   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.426641   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.426796   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.426971   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.427186   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.427331   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.427553   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:04.427723   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:04.427739   80404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:04.689161   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:04.689199   80404 machine.go:97] duration metric: took 929.790838ms to provisionDockerMachine
	I0612 21:38:04.689212   80404 start.go:293] postStartSetup for "embed-certs-591460" (driver="kvm2")
	I0612 21:38:04.689223   80404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:04.689242   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.689569   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:04.689616   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.692484   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.692841   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.692864   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.693002   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.693191   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.693326   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.693469   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.923975   80762 start.go:364] duration metric: took 4m11.963543792s to acquireMachinesLock for "old-k8s-version-983302"
	I0612 21:38:04.924056   80762 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:38:04.924068   80762 fix.go:54] fixHost starting: 
	I0612 21:38:04.924507   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:04.924543   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:04.942022   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0612 21:38:04.942428   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:04.942891   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:38:04.942917   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:04.943345   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:04.943553   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:04.943726   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetState
	I0612 21:38:04.945403   80762 fix.go:112] recreateIfNeeded on old-k8s-version-983302: state=Stopped err=<nil>
	I0612 21:38:04.945427   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	W0612 21:38:04.945581   80762 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:38:04.947672   80762 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-983302" ...
	I0612 21:38:01.560387   80243 addons.go:510] duration metric: took 1.356939902s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0612 21:38:02.416070   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:04.416451   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:04.774287   80404 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:04.778568   80404 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:04.778596   80404 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:04.778667   80404 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:04.778740   80404 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:04.778819   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:04.788602   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:04.813969   80404 start.go:296] duration metric: took 124.741162ms for postStartSetup
	I0612 21:38:04.814020   80404 fix.go:56] duration metric: took 19.717527303s for fixHost
	I0612 21:38:04.814049   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.816907   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.817268   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.817294   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.817511   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.817728   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.817905   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.818087   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.818293   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:04.818501   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:04.818516   80404 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:04.923846   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228284.879920542
	
	I0612 21:38:04.923868   80404 fix.go:216] guest clock: 1718228284.879920542
	I0612 21:38:04.923874   80404 fix.go:229] Guest: 2024-06-12 21:38:04.879920542 +0000 UTC Remote: 2024-06-12 21:38:04.814026698 +0000 UTC m=+300.152179547 (delta=65.893844ms)
	I0612 21:38:04.923890   80404 fix.go:200] guest clock delta is within tolerance: 65.893844ms
	I0612 21:38:04.923894   80404 start.go:83] releasing machines lock for "embed-certs-591460", held for 19.827427255s
	I0612 21:38:04.923920   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.924155   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:04.926708   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.927102   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.927146   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.927281   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.927788   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.927955   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.928043   80404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:04.928099   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.928158   80404 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:04.928182   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.930931   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931237   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931377   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.931415   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931561   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.931587   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931592   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.931742   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.931790   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.931916   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.931916   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.932111   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.932127   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.932250   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:05.009184   80404 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:05.035746   80404 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:05.181527   80404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:05.189035   80404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:05.189113   80404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:05.205860   80404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:05.205886   80404 start.go:494] detecting cgroup driver to use...
	I0612 21:38:05.205957   80404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:05.223913   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:05.239598   80404 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:05.239679   80404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:05.253501   80404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:05.268094   80404 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:05.397260   80404 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:05.560454   80404 docker.go:233] disabling docker service ...
	I0612 21:38:05.560532   80404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:05.579197   80404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:05.593420   80404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:05.728145   80404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:05.860041   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:05.876025   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:05.895242   80404 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:38:05.895336   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.906575   80404 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:05.906662   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.918248   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.929178   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.942169   80404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:05.953542   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.969045   80404 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.989509   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:06.001532   80404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:06.012676   80404 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:06.012740   80404 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:06.030028   80404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:06.048168   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:06.190039   80404 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:06.349088   80404 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:06.349151   80404 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:06.355251   80404 start.go:562] Will wait 60s for crictl version
	I0612 21:38:06.355321   80404 ssh_runner.go:195] Run: which crictl
	I0612 21:38:06.359456   80404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:06.400450   80404 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
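The crictl probe above confirms CRI-O 1.29.1 is answering on the socket written to /etc/crictl.yaml earlier. A minimal sketch of the same check run by hand on the guest, using only the endpoint already shown in the log:

    # Hedged sketch: manual equivalent of the version probe above.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl info | head -n 20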
	I0612 21:38:06.400525   80404 ssh_runner.go:195] Run: crio --version
	I0612 21:38:06.430078   80404 ssh_runner.go:195] Run: crio --version
	I0612 21:38:06.461616   80404 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:38:04.949078   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .Start
	I0612 21:38:04.949226   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring networks are active...
	I0612 21:38:04.949936   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network default is active
	I0612 21:38:04.950371   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network mk-old-k8s-version-983302 is active
	I0612 21:38:04.950813   80762 main.go:141] libmachine: (old-k8s-version-983302) Getting domain xml...
	I0612 21:38:04.951549   80762 main.go:141] libmachine: (old-k8s-version-983302) Creating domain...
	I0612 21:38:06.296150   80762 main.go:141] libmachine: (old-k8s-version-983302) Waiting to get IP...
	I0612 21:38:06.296978   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.297465   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.297570   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.297453   81824 retry.go:31] will retry after 256.609938ms: waiting for machine to come up
	I0612 21:38:06.556307   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.556935   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.556967   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.556884   81824 retry.go:31] will retry after 285.754887ms: waiting for machine to come up
	I0612 21:38:06.844674   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.845227   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.845255   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.845171   81824 retry.go:31] will retry after 326.266367ms: waiting for machine to come up
	I0612 21:38:07.172788   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:07.173414   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:07.173447   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:07.173353   81824 retry.go:31] will retry after 393.443927ms: waiting for machine to come up
	I0612 21:38:07.568084   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:07.568645   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:07.568673   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:07.568609   81824 retry.go:31] will retry after 726.66775ms: waiting for machine to come up
	I0612 21:38:06.462860   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:06.466111   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:06.466521   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:06.466551   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:06.466837   80404 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:06.471361   80404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:06.485595   80404 kubeadm.go:877] updating cluster {Name:embed-certs-591460 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:06.485718   80404 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:38:06.485761   80404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:06.528708   80404 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:38:06.528778   80404 ssh_runner.go:195] Run: which lz4
	I0612 21:38:06.533340   80404 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0612 21:38:06.538076   80404 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:38:06.538115   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 21:38:08.078495   80404 crio.go:462] duration metric: took 1.545201872s to copy over tarball
	I0612 21:38:08.078573   80404 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:38:06.917632   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:07.916734   80243 node_ready.go:49] node "default-k8s-diff-port-376087" has status "Ready":"True"
	I0612 21:38:07.916763   80243 node_ready.go:38] duration metric: took 7.504763576s for node "default-k8s-diff-port-376087" to be "Ready" ...
	I0612 21:38:07.916775   80243 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:38:07.924249   80243 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.931751   80243 pod_ready.go:92] pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:07.931773   80243 pod_ready.go:81] duration metric: took 7.493608ms for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.931782   80243 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.937804   80243 pod_ready.go:92] pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:07.937880   80243 pod_ready.go:81] duration metric: took 6.090191ms for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.937904   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:09.944927   80243 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:08.296811   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:08.297295   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:08.297319   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:08.297250   81824 retry.go:31] will retry after 658.540746ms: waiting for machine to come up
	I0612 21:38:08.957164   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:08.957611   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:08.957635   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:08.957576   81824 retry.go:31] will retry after 921.725713ms: waiting for machine to come up
	I0612 21:38:09.880881   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:09.881672   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:09.881703   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:09.881604   81824 retry.go:31] will retry after 1.355846361s: waiting for machine to come up
	I0612 21:38:11.238616   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:11.239058   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:11.239094   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:11.238996   81824 retry.go:31] will retry after 1.3469357s: waiting for machine to come up
	I0612 21:38:12.587245   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:12.587747   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:12.587785   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:12.587683   81824 retry.go:31] will retry after 1.616666063s: waiting for machine to come up
	I0612 21:38:10.426384   80404 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.347778968s)
	I0612 21:38:10.426418   80404 crio.go:469] duration metric: took 2.347893056s to extract the tarball
	I0612 21:38:10.426427   80404 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:38:10.472235   80404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:10.522846   80404 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:38:10.522869   80404 cache_images.go:84] Images are preloaded, skipping loading
	I0612 21:38:10.522876   80404 kubeadm.go:928] updating node { 192.168.39.147 8443 v1.30.1 crio true true} ...
	I0612 21:38:10.523007   80404 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-591460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:38:10.523163   80404 ssh_runner.go:195] Run: crio config
	I0612 21:38:10.577165   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:38:10.577193   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:10.577209   80404 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:38:10.577244   80404 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-591460 NodeName:embed-certs-591460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:38:10.577400   80404 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-591460"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:38:10.577479   80404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:38:10.587499   80404 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:38:10.587573   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:38:10.597410   80404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0612 21:38:10.614617   80404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:38:10.632222   80404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0612 21:38:10.649693   80404 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I0612 21:38:10.653639   80404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:10.666501   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:10.802679   80404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:10.820975   80404 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460 for IP: 192.168.39.147
	I0612 21:38:10.821001   80404 certs.go:194] generating shared ca certs ...
	I0612 21:38:10.821022   80404 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:10.821187   80404 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:38:10.821233   80404 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:38:10.821243   80404 certs.go:256] generating profile certs ...
	I0612 21:38:10.821326   80404 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/client.key
	I0612 21:38:10.821402   80404 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.key.3b2e21e0
	I0612 21:38:10.821440   80404 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.key
	I0612 21:38:10.821575   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:38:10.821616   80404 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:38:10.821626   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:38:10.821655   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:38:10.821706   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:38:10.821751   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:38:10.821812   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:10.822621   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:38:10.879261   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:38:10.924352   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:38:10.961294   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:38:10.993792   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0612 21:38:11.039515   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:38:11.063161   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:38:11.086759   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:38:11.109693   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:38:11.133083   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:38:11.155716   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:38:11.181860   80404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:38:11.199989   80404 ssh_runner.go:195] Run: openssl version
	I0612 21:38:11.205811   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:38:11.216640   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.221692   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.221754   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.227829   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:38:11.239918   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:38:11.251648   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.256123   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.256176   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.261880   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:38:11.273184   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:38:11.284832   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.289679   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.289732   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.295338   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:38:11.306317   80404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:38:11.310737   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:38:11.320403   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:38:11.327756   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:38:11.333976   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:38:11.340200   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:38:11.346386   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
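Each of the `openssl x509 -noout -in <cert> -checkend 86400` runs above asks a single question: will this certificate still be valid 24 hours from now? A self-contained Go sketch of the same check (the path and helper name are illustrative, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at certPath expires inside the given
// window, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}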
	I0612 21:38:11.352268   80404 kubeadm.go:391] StartCluster: {Name:embed-certs-591460 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:38:11.352385   80404 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:38:11.352435   80404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:11.390802   80404 cri.go:89] found id: ""
	I0612 21:38:11.390870   80404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:38:11.402604   80404 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:38:11.402626   80404 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:38:11.402630   80404 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:38:11.402682   80404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:38:11.413636   80404 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:38:11.414999   80404 kubeconfig.go:125] found "embed-certs-591460" server: "https://192.168.39.147:8443"
	I0612 21:38:11.417654   80404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:38:11.427456   80404 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.147
	I0612 21:38:11.427496   80404 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:38:11.427509   80404 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:38:11.427559   80404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:11.462135   80404 cri.go:89] found id: ""
	I0612 21:38:11.462211   80404 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:38:11.478193   80404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:38:11.488816   80404 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:38:11.488838   80404 kubeadm.go:156] found existing configuration files:
	
	I0612 21:38:11.488899   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:38:11.498079   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:38:11.498154   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:38:11.508044   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:38:11.519721   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:38:11.519785   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:38:11.529554   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:38:11.538699   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:38:11.538750   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:38:11.548154   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:38:11.559980   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:38:11.560053   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:38:11.569737   80404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:38:11.579812   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:11.703454   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:12.773142   80404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069644541s)
	I0612 21:38:12.773183   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:12.991458   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:13.080268   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
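The restart path above does not rerun a full `kubeadm init`; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly written /var/tmp/minikube/kubeadm.yaml using the version-pinned binaries. A minimal os/exec sketch of that sequence (assumes it runs on the node itself with root privileges; this is not minikube's ssh_runner code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phase order taken from the logged commands above.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		// Use the version-pinned kubeadm binary directly instead of relying on PATH.
		cmd := exec.Command("/var/lib/minikube/binaries/v1.30.1/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(fmt.Errorf("kubeadm init phase %v: %w", phase, err))
		}
	}
}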
	I0612 21:38:13.207751   80404 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:38:13.207934   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:13.708672   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:14.208389   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:14.268408   80404 api_server.go:72] duration metric: took 1.060631955s to wait for apiserver process to appear ...
	I0612 21:38:14.268443   80404 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:38:14.268464   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:14.269096   80404 api_server.go:269] stopped: https://192.168.39.147:8443/healthz: Get "https://192.168.39.147:8443/healthz": dial tcp 192.168.39.147:8443: connect: connection refused
	I0612 21:38:10.445507   80243 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.445530   80243 pod_ready.go:81] duration metric: took 2.50760731s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.445542   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.450290   80243 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.450310   80243 pod_ready.go:81] duration metric: took 4.759656ms for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.450323   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.454909   80243 pod_ready.go:92] pod "kube-proxy-8lrgv" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.454940   80243 pod_ready.go:81] duration metric: took 4.597123ms for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.454951   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:12.587416   80243 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:13.505858   80243 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:13.505884   80243 pod_ready.go:81] duration metric: took 3.050925673s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:13.505896   80243 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:14.206281   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:14.206781   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:14.206810   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:14.206716   81824 retry.go:31] will retry after 2.057638604s: waiting for machine to come up
	I0612 21:38:16.266372   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:16.266920   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:16.266955   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:16.266858   81824 retry.go:31] will retry after 2.387834661s: waiting for machine to come up
	I0612 21:38:14.769114   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.056504   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:38:17.056539   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:38:17.056557   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.075356   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:38:17.075391   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:38:17.268731   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.277080   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:38:17.277111   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:38:17.768638   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.773438   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:38:17.773464   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:38:18.269037   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:18.273939   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0612 21:38:18.286895   80404 api_server.go:141] control plane version: v1.30.1
	I0612 21:38:18.286922   80404 api_server.go:131] duration metric: took 4.018473342s to wait for apiserver health ...
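The block of healthz probes above follows the usual warm-up pattern: connection refused while the apiserver container starts, then 403 for the anonymous user, then 500 while poststarthooks (rbac/bootstrap-roles, bootstrap priority classes, apiservice discovery) finish, and finally 200 "ok". A self-contained Go sketch of such a polling loop (certificate verification is skipped for brevity; this is not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz probes url until it returns 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver's serving cert is not in the local trust store in this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 (anonymous user) and 500 (pending poststarthooks) mean "keep waiting".
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.147:8443/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
}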
	I0612 21:38:18.286931   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:38:18.286937   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:18.288955   80404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:38:18.290619   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:38:18.305334   80404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:38:18.336590   80404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:38:18.351276   80404 system_pods.go:59] 8 kube-system pods found
	I0612 21:38:18.351320   80404 system_pods.go:61] "coredns-7db6d8ff4d-z99cq" [575689b8-3c51-45c8-874c-481e4b9db39b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:38:18.351331   80404 system_pods.go:61] "etcd-embed-certs-591460" [190c1552-6bca-41f2-9ea9-e415e1ae9406] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:38:18.351342   80404 system_pods.go:61] "kube-apiserver-embed-certs-591460" [c0fed28f-1d80-44eb-a66a-3a5b36704882] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:38:18.351350   80404 system_pods.go:61] "kube-controller-manager-embed-certs-591460" [79758f2a-2517-4a76-a3ae-536ac3adf781] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:38:18.351357   80404 system_pods.go:61] "kube-proxy-79kz5" [74ddb284-7cb2-46ec-ab9f-246dbfa0c4ec] Running
	I0612 21:38:18.351372   80404 system_pods.go:61] "kube-scheduler-embed-certs-591460" [d9916521-fcc1-4bf1-8b03-8a5553f07bd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:38:18.351383   80404 system_pods.go:61] "metrics-server-569cc877fc-bkhxn" [f78482c8-82ea-4dbd-999f-2e4c73c98b65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:38:18.351396   80404 system_pods.go:61] "storage-provisioner" [b3b117f7-ac44-4430-afb4-c6991ce1b71d] Running
	I0612 21:38:18.351407   80404 system_pods.go:74] duration metric: took 14.792966ms to wait for pod list to return data ...
	I0612 21:38:18.351419   80404 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:38:18.357736   80404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:38:18.357769   80404 node_conditions.go:123] node cpu capacity is 2
	I0612 21:38:18.357786   80404 node_conditions.go:105] duration metric: took 6.360028ms to run NodePressure ...
	I0612 21:38:18.357805   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:18.634312   80404 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:38:18.638679   80404 kubeadm.go:733] kubelet initialised
	I0612 21:38:18.638700   80404 kubeadm.go:734] duration metric: took 4.362243ms waiting for restarted kubelet to initialise ...
	I0612 21:38:18.638706   80404 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:38:18.643840   80404 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.648561   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.648585   80404 pod_ready.go:81] duration metric: took 4.721795ms for pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.648597   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.648606   80404 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.654013   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "etcd-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.654036   80404 pod_ready.go:81] duration metric: took 5.419602ms for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.654046   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "etcd-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.654054   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.659445   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.659468   80404 pod_ready.go:81] duration metric: took 5.404211ms for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.659479   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.659487   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.741451   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.741480   80404 pod_ready.go:81] duration metric: took 81.981354ms for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.741489   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.741495   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-79kz5" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:19.140710   80404 pod_ready.go:92] pod "kube-proxy-79kz5" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:19.140734   80404 pod_ready.go:81] duration metric: took 399.230349ms for pod "kube-proxy-79kz5" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:19.140744   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
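The pod_ready.go entries above poll each system-critical pod for its Ready condition and deliberately skip pods whose node is still NotReady after the kubelet restart. A stripped-down client-go sketch of one such wait (the pod name comes from the log; the kubeconfig path is illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod carries condition Ready=True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-79kz5", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("kube-proxy-79kz5 is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for pod to become Ready")
}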
	I0612 21:38:15.513300   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:18.013924   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:20.024841   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:18.656575   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:18.657074   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:18.657111   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:18.657022   81824 retry.go:31] will retry after 3.518256927s: waiting for machine to come up
	I0612 21:38:22.176416   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.176901   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has current primary IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.176930   80762 main.go:141] libmachine: (old-k8s-version-983302) Found IP for machine: 192.168.50.81
	I0612 21:38:22.176965   80762 main.go:141] libmachine: (old-k8s-version-983302) Reserving static IP address...
	I0612 21:38:22.177385   80762 main.go:141] libmachine: (old-k8s-version-983302) Reserved static IP address: 192.168.50.81
	I0612 21:38:22.177422   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "old-k8s-version-983302", mac: "52:54:00:7b:c8:d2", ip: "192.168.50.81"} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.177435   80762 main.go:141] libmachine: (old-k8s-version-983302) Waiting for SSH to be available...
	I0612 21:38:22.177459   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | skip adding static IP to network mk-old-k8s-version-983302 - found existing host DHCP lease matching {name: "old-k8s-version-983302", mac: "52:54:00:7b:c8:d2", ip: "192.168.50.81"}
	I0612 21:38:22.177471   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Getting to WaitForSSH function...
	I0612 21:38:22.179728   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.180130   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.180158   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.180273   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH client type: external
	I0612 21:38:22.180334   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa (-rw-------)
	I0612 21:38:22.180368   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:22.180387   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | About to run SSH command:
	I0612 21:38:22.180399   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | exit 0
	I0612 21:38:22.308621   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:22.308979   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetConfigRaw
	I0612 21:38:22.309620   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:22.312747   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.313124   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.313155   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.313421   80762 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:38:22.313635   80762 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:22.313658   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:22.313884   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.316476   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.316961   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.317014   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.317221   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.317408   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.317600   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.317775   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.317955   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.318195   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.318207   80762 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:22.431693   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:22.431728   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.431979   80762 buildroot.go:166] provisioning hostname "old-k8s-version-983302"
	I0612 21:38:22.432006   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.432191   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.434830   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.435267   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.435300   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.435431   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.435598   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.435718   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.435826   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.436056   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.436237   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.436252   80762 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-983302 && echo "old-k8s-version-983302" | sudo tee /etc/hostname
	I0612 21:38:22.563119   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-983302
	
	I0612 21:38:22.563184   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.565915   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.566281   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.566315   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.566513   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.566704   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.566885   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.567021   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.567243   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.567463   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.567490   80762 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-983302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-983302/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-983302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:22.690443   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:22.690474   80762 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:22.690494   80762 buildroot.go:174] setting up certificates
	I0612 21:38:22.690504   80762 provision.go:84] configureAuth start
	I0612 21:38:22.690514   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.690774   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:22.693186   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.693528   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.693576   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.693689   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.695948   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.696285   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.696318   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.696432   80762 provision.go:143] copyHostCerts
	I0612 21:38:22.696501   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:22.696521   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:22.696583   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:22.696662   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:22.696671   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:22.696693   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:22.696774   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:22.696784   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:22.696803   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:22.696847   80762 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-983302 san=[127.0.0.1 192.168.50.81 localhost minikube old-k8s-version-983302]
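Minikube generates this server certificate in its own Go code; purely for illustration, a roughly equivalent certificate with the same SANs could be produced with openssl (hypothetical sketch, file names assumed, not minikube's actual implementation):

openssl req -new -key server-key.pem -subj "/O=jenkins.old-k8s-version-983302/CN=minikube" -out server.csr
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.50.81,DNS:localhost,DNS:minikube,DNS:old-k8s-version-983302") \
  -out server.pem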
	I0612 21:38:23.576378   80157 start.go:364] duration metric: took 53.730674695s to acquireMachinesLock for "no-preload-087875"
	I0612 21:38:23.576429   80157 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:38:23.576436   80157 fix.go:54] fixHost starting: 
	I0612 21:38:23.576844   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:23.576875   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:23.594879   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40925
	I0612 21:38:23.595284   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:23.595811   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:38:23.595836   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:23.596201   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:23.596404   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:23.596559   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:38:23.598372   80157 fix.go:112] recreateIfNeeded on no-preload-087875: state=Stopped err=<nil>
	I0612 21:38:23.598399   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	W0612 21:38:23.598558   80157 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:38:23.600649   80157 out.go:177] * Restarting existing kvm2 VM for "no-preload-087875" ...
	I0612 21:38:21.147354   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:23.147393   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:22.863618   80762 provision.go:177] copyRemoteCerts
	I0612 21:38:22.863672   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:22.863698   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.866979   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.867371   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.867403   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.867548   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.867734   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.867904   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.868126   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:22.958350   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 21:38:22.984409   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:23.009623   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0612 21:38:23.038026   80762 provision.go:87] duration metric: took 347.510898ms to configureAuth
	I0612 21:38:23.038063   80762 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:23.038309   80762 config.go:182] Loaded profile config "old-k8s-version-983302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:38:23.038390   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.041196   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.041634   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.041660   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.041842   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.042044   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.042222   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.042410   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.042580   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:23.042780   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:23.042799   80762 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:23.324862   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
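The "%!s(MISSING)" in the logged command above is an artifact of the Go logger re-interpreting a literal %s verb inside the remote shell command; judging by the options echoed back in the output, the command actually run on the guest was most likely:

sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio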
	I0612 21:38:23.324893   80762 machine.go:97] duration metric: took 1.01124225s to provisionDockerMachine
	I0612 21:38:23.324904   80762 start.go:293] postStartSetup for "old-k8s-version-983302" (driver="kvm2")
	I0612 21:38:23.324913   80762 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:23.324928   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.325240   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:23.325274   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.328007   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.328343   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.328372   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.328578   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.328770   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.328939   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.329068   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.416040   80762 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:23.420586   80762 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:23.420607   80762 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:23.420674   80762 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:23.420739   80762 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:23.420823   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:23.432266   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:23.460619   80762 start.go:296] duration metric: took 135.703593ms for postStartSetup
	I0612 21:38:23.460661   80762 fix.go:56] duration metric: took 18.536593686s for fixHost
	I0612 21:38:23.460684   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.463415   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.463745   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.463780   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.463909   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.464110   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.464248   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.464378   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.464533   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:23.464742   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:23.464754   80762 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:23.576211   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228303.539451044
	
	I0612 21:38:23.576231   80762 fix.go:216] guest clock: 1718228303.539451044
	I0612 21:38:23.576239   80762 fix.go:229] Guest: 2024-06-12 21:38:23.539451044 +0000 UTC Remote: 2024-06-12 21:38:23.460665921 +0000 UTC m=+270.637213069 (delta=78.785123ms)
	I0612 21:38:23.576285   80762 fix.go:200] guest clock delta is within tolerance: 78.785123ms
	I0612 21:38:23.576291   80762 start.go:83] releasing machines lock for "old-k8s-version-983302", held for 18.65227368s
	I0612 21:38:23.576316   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.576617   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:23.579493   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.579881   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.579913   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.580120   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580693   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580865   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580952   80762 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:23.581005   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.581111   80762 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:23.581141   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.584053   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584262   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584443   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.584479   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584587   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.584690   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.584728   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584757   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.584855   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.584918   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.584980   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.585067   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.585115   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.585227   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.666055   80762 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:23.688409   80762 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:23.848030   80762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:23.855302   80762 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:23.855383   80762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:23.874362   80762 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:23.874389   80762 start.go:494] detecting cgroup driver to use...
	I0612 21:38:23.874461   80762 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:23.893239   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:23.909774   80762 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:23.909844   80762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:23.926084   80762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:23.943341   80762 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:24.072731   80762 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:24.244551   80762 docker.go:233] disabling docker service ...
	I0612 21:38:24.244624   80762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:24.261862   80762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:24.277051   80762 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:24.426146   80762 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:24.560634   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:24.575339   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:24.595965   80762 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0612 21:38:24.596043   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.607814   80762 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:24.607892   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.619001   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.630982   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
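Taken together, the runtime configuration steps above leave two small config files on the guest: the earlier tee wrote crictl's client config, and the sed edits just above adjust CRI-O's drop-in config. Assuming they all succeeded, the relevant contents are roughly (surrounding sections omitted):

# /etc/crictl.yaml (default path read by crictl)
runtime-endpoint: unix:///var/run/crio/crio.sock

# /etc/crio/crio.conf.d/02-crio.conf (keys touched by the sed commands)
pause_image = "registry.k8s.io/pause:3.2"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"

i.e. crictl talks directly to CRI-O, and CRI-O is pinned to the pause image expected by Kubernetes v1.20.0, driven with cgroupfs, with conmon placed in the pod cgroup as the cgroupfs manager requires.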
	I0612 21:38:24.644326   80762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:24.658640   80762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:24.673944   80762 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:24.673994   80762 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:24.693853   80762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
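The failed sysctl probe above simply means the br_netfilter module was not yet loaded (/proc/sys/net/bridge/ only exists once it is); loading it and enabling IPv4 forwarding are the standard kernel prerequisites for bridged pod networking. A persistent equivalent of what was done here (sketch, file name assumed):

sudo modprobe br_netfilter
printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
sudo sysctl --system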
	I0612 21:38:24.709251   80762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:24.856222   80762 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:25.023760   80762 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:25.023842   80762 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:25.029449   80762 start.go:562] Will wait 60s for crictl version
	I0612 21:38:25.029522   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:25.033750   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:25.080911   80762 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:25.081018   80762 ssh_runner.go:195] Run: crio --version
	I0612 21:38:25.111727   80762 ssh_runner.go:195] Run: crio --version
	I0612 21:38:25.145999   80762 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0612 21:38:22.512748   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:24.515486   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:23.602119   80157 main.go:141] libmachine: (no-preload-087875) Calling .Start
	I0612 21:38:23.602319   80157 main.go:141] libmachine: (no-preload-087875) Ensuring networks are active...
	I0612 21:38:23.603167   80157 main.go:141] libmachine: (no-preload-087875) Ensuring network default is active
	I0612 21:38:23.603533   80157 main.go:141] libmachine: (no-preload-087875) Ensuring network mk-no-preload-087875 is active
	I0612 21:38:23.603887   80157 main.go:141] libmachine: (no-preload-087875) Getting domain xml...
	I0612 21:38:23.604617   80157 main.go:141] libmachine: (no-preload-087875) Creating domain...
	I0612 21:38:24.978550   80157 main.go:141] libmachine: (no-preload-087875) Waiting to get IP...
	I0612 21:38:24.979551   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:24.979945   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:24.980007   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:24.979925   81986 retry.go:31] will retry after 224.557195ms: waiting for machine to come up
	I0612 21:38:25.206441   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.206928   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.206957   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.206875   81986 retry.go:31] will retry after 361.682908ms: waiting for machine to come up
	I0612 21:38:25.570564   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.571139   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.571184   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.571089   81986 retry.go:31] will retry after 328.335873ms: waiting for machine to come up
	I0612 21:38:25.901471   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.902020   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.902054   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.901953   81986 retry.go:31] will retry after 505.408325ms: waiting for machine to come up
	I0612 21:38:26.408636   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:26.409139   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:26.409167   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:26.409091   81986 retry.go:31] will retry after 749.519426ms: waiting for machine to come up
	I0612 21:38:27.160100   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:27.160563   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:27.160611   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:27.160537   81986 retry.go:31] will retry after 641.037463ms: waiting for machine to come up
	I0612 21:38:25.147420   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:25.151029   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:25.151402   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:25.151432   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:25.151726   80762 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:25.156561   80762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:25.171243   80762 kubeadm.go:877] updating cluster {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:25.171386   80762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:38:25.171429   80762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:25.225872   80762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:38:25.225936   80762 ssh_runner.go:195] Run: which lz4
	I0612 21:38:25.230447   80762 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0612 21:38:25.235452   80762 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:38:25.235485   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0612 21:38:27.033962   80762 crio.go:462] duration metric: took 1.803565745s to copy over tarball
	I0612 21:38:27.034045   80762 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:38:25.149629   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:27.651785   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:26.516743   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:29.013751   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:27.803722   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:27.804278   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:27.804316   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:27.804252   81986 retry.go:31] will retry after 1.184505978s: waiting for machine to come up
	I0612 21:38:28.990221   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:28.990736   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:28.990763   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:28.990709   81986 retry.go:31] will retry after 1.061139219s: waiting for machine to come up
	I0612 21:38:30.054187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:30.054768   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:30.054805   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:30.054718   81986 retry.go:31] will retry after 1.621121981s: waiting for machine to come up
	I0612 21:38:31.677355   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:31.677938   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:31.677966   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:31.677890   81986 retry.go:31] will retry after 2.17746309s: waiting for machine to come up
	I0612 21:38:30.212028   80762 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.177947965s)
	I0612 21:38:30.212073   80762 crio.go:469] duration metric: took 3.178080815s to extract the tarball
	I0612 21:38:30.212085   80762 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:38:30.256957   80762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:30.297891   80762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:38:30.297917   80762 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0612 21:38:30.298025   80762 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.298045   80762 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.298055   80762 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.298021   80762 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0612 21:38:30.298106   80762 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.298062   80762 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.298004   80762 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:30.298079   80762 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.299755   80762 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0612 21:38:30.299842   80762 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.299848   80762 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.299843   80762 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:30.299866   80762 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.299876   80762 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.299905   80762 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.299755   80762 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.466739   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0612 21:38:30.516078   80762 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0612 21:38:30.516127   80762 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0612 21:38:30.516174   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.520362   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0612 21:38:30.545437   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.563320   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0612 21:38:30.599110   80762 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0612 21:38:30.599155   80762 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.599217   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.603578   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.639450   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0612 21:38:30.649462   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.650602   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.652555   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.656970   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.672136   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.766185   80762 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0612 21:38:30.766233   80762 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.766279   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.778901   80762 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0612 21:38:30.778946   80762 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.778952   80762 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0612 21:38:30.778983   80762 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.778994   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.779041   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.793610   80762 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0612 21:38:30.793650   80762 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.793698   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.807451   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.807482   80762 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0612 21:38:30.807518   80762 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.807458   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.807518   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.807557   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.807559   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.916470   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0612 21:38:30.916564   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0612 21:38:30.916576   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0612 21:38:30.916603   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0612 21:38:30.916646   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.953152   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0612 21:38:31.194046   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:31.341827   80762 cache_images.go:92] duration metric: took 1.043891497s to LoadCachedImages
	W0612 21:38:31.341922   80762 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0612 21:38:31.341937   80762 kubeadm.go:928] updating node { 192.168.50.81 8443 v1.20.0 crio true true} ...
	I0612 21:38:31.342064   80762 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-983302 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:38:31.342154   80762 ssh_runner.go:195] Run: crio config
	I0612 21:38:31.395673   80762 cni.go:84] Creating CNI manager for ""
	I0612 21:38:31.395706   80762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:31.395722   80762 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:38:31.395744   80762 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-983302 NodeName:old-k8s-version-983302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0612 21:38:31.395918   80762 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-983302"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:38:31.395995   80762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0612 21:38:31.410706   80762 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:38:31.410785   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:38:31.425161   80762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0612 21:38:31.445883   80762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:38:31.463605   80762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0612 21:38:31.482797   80762 ssh_runner.go:195] Run: grep 192.168.50.81	control-plane.minikube.internal$ /etc/hosts
	I0612 21:38:31.486974   80762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
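The one-liner above rewrites /etc/hosts idempotently: any existing control-plane.minikube.internal entry is filtered out and a fresh mapping is appended, so afterwards the file contains:

192.168.50.81	control-plane.minikube.internal

The same pattern was used a few steps earlier for host.minikube.internal (mapped to 192.168.50.1).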
	I0612 21:38:31.499681   80762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:31.645490   80762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:31.668769   80762 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302 for IP: 192.168.50.81
	I0612 21:38:31.668797   80762 certs.go:194] generating shared ca certs ...
	I0612 21:38:31.668820   80762 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:31.668987   80762 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:38:31.669061   80762 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:38:31.669088   80762 certs.go:256] generating profile certs ...
	I0612 21:38:31.669212   80762 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/client.key
	I0612 21:38:31.669309   80762 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key.1098c83c
	I0612 21:38:31.669373   80762 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key
	I0612 21:38:31.669548   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:38:31.669598   80762 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:38:31.669613   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:38:31.669662   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:38:31.669723   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:38:31.669759   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:38:31.669830   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:31.670835   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:38:31.717330   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:38:31.754900   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:38:31.798099   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:38:31.839647   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0612 21:38:31.883454   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:38:31.920765   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:38:31.953069   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0612 21:38:31.978134   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:38:32.002475   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:38:32.027784   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:38:32.053563   80762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:38:32.074493   80762 ssh_runner.go:195] Run: openssl version
	I0612 21:38:32.080620   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:38:32.093531   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.098615   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.098688   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.104777   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:38:32.116551   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:38:32.130188   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.135197   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.135279   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.142777   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:38:32.156051   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:38:32.169866   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.175249   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.175340   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.181561   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
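The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) come from OpenSSL's subject-name hashing, which is how a CApath directory such as /etc/ssl/certs is indexed. For example:

openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
# prints b5213941; the CA is then reachable as /etc/ssl/certs/b5213941.0 (hash plus ".0" suffix)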
	I0612 21:38:32.193430   80762 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:38:32.198235   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:38:32.204654   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:38:32.210771   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:38:32.216966   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:38:32.223203   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:38:32.230990   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 21:38:32.237290   80762 kubeadm.go:391] StartCluster: {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:38:32.237446   80762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:38:32.237503   80762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:32.282436   80762 cri.go:89] found id: ""
	I0612 21:38:32.282516   80762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:38:32.295283   80762 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:38:32.295313   80762 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:38:32.295321   80762 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:38:32.295400   80762 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:38:32.307483   80762 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:38:32.308555   80762 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-983302" does not appear in /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:38:32.309335   80762 kubeconfig.go:62] /home/jenkins/minikube-integration/17779-14199/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-983302" cluster setting kubeconfig missing "old-k8s-version-983302" context setting]
	I0612 21:38:32.310486   80762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:32.397524   80762 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:38:32.411765   80762 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.81
	I0612 21:38:32.411797   80762 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:38:32.411807   80762 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:38:32.411849   80762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:32.460009   80762 cri.go:89] found id: ""
	I0612 21:38:32.460078   80762 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:38:32.481670   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:38:32.493664   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:38:32.493684   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:38:32.493734   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:38:32.503974   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:38:32.504044   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:38:32.515971   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:38:32.525772   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:38:32.525832   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:38:32.537137   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:38:32.548539   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:38:32.548600   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:38:32.560401   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:38:32.570608   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:38:32.570681   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:38:32.582763   80762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:38:32.594407   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:32.734633   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:30.151681   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:31.658859   80404 pod_ready.go:92] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:31.658881   80404 pod_ready.go:81] duration metric: took 12.518130926s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:31.658890   80404 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:33.666360   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:31.357093   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:33.513222   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:33.857141   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:33.857675   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:33.857702   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:33.857648   81986 retry.go:31] will retry after 2.485654549s: waiting for machine to come up
	I0612 21:38:36.344611   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:36.345117   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:36.345148   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:36.345075   81986 retry.go:31] will retry after 3.560063035s: waiting for machine to come up
	I0612 21:38:33.526337   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.768139   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.896716   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.986708   80762 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:38:33.986832   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:34.487194   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:34.987580   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.486966   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.987793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:36.487534   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:36.987526   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:37.487035   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.669161   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:38.166177   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:35.513787   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:38.011903   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:39.907588   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:39.908051   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:39.908110   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:39.907994   81986 retry.go:31] will retry after 4.524521166s: waiting for machine to come up
	I0612 21:38:37.986904   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:38.487262   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:38.986907   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:39.486895   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:39.987060   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.487385   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.987049   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:41.487325   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:41.987550   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:42.487225   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.665078   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:42.665731   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:44.666653   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:40.512741   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:42.513175   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:45.013451   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:44.434330   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.434850   80157 main.go:141] libmachine: (no-preload-087875) Found IP for machine: 192.168.72.63
	I0612 21:38:44.434883   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has current primary IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.434893   80157 main.go:141] libmachine: (no-preload-087875) Reserving static IP address...
	I0612 21:38:44.435324   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "no-preload-087875", mac: "52:54:00:6b:a2:aa", ip: "192.168.72.63"} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.435358   80157 main.go:141] libmachine: (no-preload-087875) Reserved static IP address: 192.168.72.63
	I0612 21:38:44.435378   80157 main.go:141] libmachine: (no-preload-087875) DBG | skip adding static IP to network mk-no-preload-087875 - found existing host DHCP lease matching {name: "no-preload-087875", mac: "52:54:00:6b:a2:aa", ip: "192.168.72.63"}
	I0612 21:38:44.435388   80157 main.go:141] libmachine: (no-preload-087875) Waiting for SSH to be available...
	I0612 21:38:44.435397   80157 main.go:141] libmachine: (no-preload-087875) DBG | Getting to WaitForSSH function...
	I0612 21:38:44.437881   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.438196   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.438218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.438385   80157 main.go:141] libmachine: (no-preload-087875) DBG | Using SSH client type: external
	I0612 21:38:44.438414   80157 main.go:141] libmachine: (no-preload-087875) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa (-rw-------)
	I0612 21:38:44.438452   80157 main.go:141] libmachine: (no-preload-087875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:44.438469   80157 main.go:141] libmachine: (no-preload-087875) DBG | About to run SSH command:
	I0612 21:38:44.438489   80157 main.go:141] libmachine: (no-preload-087875) DBG | exit 0
	I0612 21:38:44.571149   80157 main.go:141] libmachine: (no-preload-087875) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:44.571499   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetConfigRaw
	I0612 21:38:44.572172   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:44.574754   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.575142   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.575187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.575406   80157 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/config.json ...
	I0612 21:38:44.575580   80157 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:44.575595   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:44.575825   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.578584   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.579008   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.579030   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.579214   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.579394   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.579534   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.579684   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.579924   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.580096   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.580109   80157 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:44.691573   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:44.691609   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.691890   80157 buildroot.go:166] provisioning hostname "no-preload-087875"
	I0612 21:38:44.691914   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.692120   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.695218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.695697   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.695729   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.695783   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.695986   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.696200   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.696383   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.696572   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.696776   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.696794   80157 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-087875 && echo "no-preload-087875" | sudo tee /etc/hostname
	I0612 21:38:44.821857   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-087875
	
	I0612 21:38:44.821893   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.824821   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.825263   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.825295   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.825523   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.825740   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.825912   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.826024   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.826187   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.826406   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.826430   80157 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-087875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-087875/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-087875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:44.948871   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:44.948904   80157 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:44.948930   80157 buildroot.go:174] setting up certificates
	I0612 21:38:44.948941   80157 provision.go:84] configureAuth start
	I0612 21:38:44.948954   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.949247   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:44.952166   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.952511   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.952538   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.952662   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.955149   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.955483   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.955505   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.955658   80157 provision.go:143] copyHostCerts
	I0612 21:38:44.955731   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:44.955743   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:44.955807   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:44.955929   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:44.955942   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:44.955975   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:44.956052   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:44.956059   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:44.956078   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:44.956125   80157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.no-preload-087875 san=[127.0.0.1 192.168.72.63 localhost minikube no-preload-087875]
	I0612 21:38:45.138701   80157 provision.go:177] copyRemoteCerts
	I0612 21:38:45.138758   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:45.138781   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.141540   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.142011   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.142055   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.142199   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.142457   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.142603   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.142765   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.234480   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:45.259043   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0612 21:38:45.290511   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:38:45.316377   80157 provision.go:87] duration metric: took 367.423709ms to configureAuth
	I0612 21:38:45.316403   80157 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:45.316607   80157 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:45.316684   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.319596   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.320160   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.320187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.320384   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.320598   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.320778   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.320973   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.321203   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:45.321368   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:45.321387   80157 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:45.611478   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:45.611511   80157 machine.go:97] duration metric: took 1.035919707s to provisionDockerMachine
	I0612 21:38:45.611523   80157 start.go:293] postStartSetup for "no-preload-087875" (driver="kvm2")
	I0612 21:38:45.611533   80157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:45.611556   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.611843   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:45.611862   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.615071   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.615542   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.615582   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.615715   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.615889   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.616028   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.616204   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.707710   80157 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:45.712155   80157 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:45.712177   80157 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:45.712235   80157 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:45.712301   80157 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:45.712386   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:45.722654   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:45.747626   80157 start.go:296] duration metric: took 136.091584ms for postStartSetup
	I0612 21:38:45.747666   80157 fix.go:56] duration metric: took 22.171227252s for fixHost
	I0612 21:38:45.747685   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.750588   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.750972   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.750999   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.751231   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.751443   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.751598   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.751773   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.752005   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:45.752181   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:45.752195   80157 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:45.864042   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228325.837473906
	
	I0612 21:38:45.864068   80157 fix.go:216] guest clock: 1718228325.837473906
	I0612 21:38:45.864079   80157 fix.go:229] Guest: 2024-06-12 21:38:45.837473906 +0000 UTC Remote: 2024-06-12 21:38:45.747669277 +0000 UTC m=+358.493088442 (delta=89.804629ms)
	I0612 21:38:45.864106   80157 fix.go:200] guest clock delta is within tolerance: 89.804629ms
	I0612 21:38:45.864114   80157 start.go:83] releasing machines lock for "no-preload-087875", held for 22.287706082s
	I0612 21:38:45.864152   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.864448   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:45.867230   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.867603   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.867633   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.867768   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868293   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868453   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868535   80157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:45.868575   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.868663   80157 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:45.868681   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.871218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871489   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871678   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.871719   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871915   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.872061   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.872085   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.872109   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.872240   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.872246   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.872522   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.872529   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.872692   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.872868   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.953249   80157 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:45.976778   80157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:46.124511   80157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:46.130509   80157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:46.130575   80157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:46.149670   80157 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:46.149691   80157 start.go:494] detecting cgroup driver to use...
	I0612 21:38:46.149755   80157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:46.167865   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:46.182896   80157 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:46.182951   80157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:46.197058   80157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:46.211517   80157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:46.331986   80157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:46.500675   80157 docker.go:233] disabling docker service ...
	I0612 21:38:46.500745   80157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:46.516858   80157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:46.530617   80157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:46.674917   80157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:46.810090   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:46.825079   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:46.843895   80157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:38:46.843963   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.854170   80157 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:46.854245   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.864699   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.875057   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.886063   80157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:46.897688   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.908984   80157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.926803   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.939373   80157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:46.948868   80157 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:46.948922   80157 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:46.963593   80157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:46.973735   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:47.108669   80157 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:47.249938   80157 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:47.250044   80157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:47.255480   80157 start.go:562] Will wait 60s for crictl version
	I0612 21:38:47.255556   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.259730   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:47.303074   80157 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:47.303187   80157 ssh_runner.go:195] Run: crio --version
	I0612 21:38:47.332225   80157 ssh_runner.go:195] Run: crio --version
	I0612 21:38:47.363628   80157 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:38:42.987579   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:43.487465   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:43.987265   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:44.487935   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:44.987399   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:45.487793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:45.986898   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:46.486985   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:46.986848   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:47.486947   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:47.164573   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:49.165711   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:47.512195   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:49.512366   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:47.365068   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:47.367703   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:47.368079   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:47.368103   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:47.368325   80157 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:47.372608   80157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:47.386411   80157 kubeadm.go:877] updating cluster {Name:no-preload-087875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:47.386750   80157 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:38:47.386796   80157 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:47.422165   80157 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:38:47.422189   80157 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0612 21:38:47.422227   80157 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:47.422280   80157 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.422355   80157 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.422370   80157 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.422311   80157 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.422347   80157 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.422318   80157 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0612 21:38:47.422599   80157 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.423599   80157 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0612 21:38:47.423610   80157 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.423612   80157 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.423630   80157 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:47.423626   80157 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.423699   80157 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.423737   80157 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.423720   80157 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.556807   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0612 21:38:47.557424   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.561887   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.569402   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.571880   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.576879   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.587848   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.759890   80157 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0612 21:38:47.759926   80157 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.759947   80157 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0612 21:38:47.759973   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.759976   80157 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0612 21:38:47.760006   80157 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.760015   80157 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0612 21:38:47.759977   80157 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.760061   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760063   80157 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.760075   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760073   80157 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0612 21:38:47.760091   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760101   80157 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.760164   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.766878   80157 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0612 21:38:47.766905   80157 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.766943   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.777168   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.777197   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.778414   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.778459   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.778414   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.779057   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.882668   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0612 21:38:47.882770   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.902416   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0612 21:38:47.902532   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:47.917388   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0612 21:38:47.917417   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0612 21:38:47.917417   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0612 21:38:47.917473   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0612 21:38:47.917501   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:38:47.917528   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:47.917545   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:38:47.917500   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0612 21:38:47.917558   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.917594   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.917502   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:47.917559   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0612 21:38:47.929251   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0612 21:38:47.929299   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0612 21:38:47.929308   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0612 21:38:48.312589   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:50.713720   80157 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1: (2.796151375s)
	I0612 21:38:50.713767   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0612 21:38:50.713877   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.796263274s)
	I0612 21:38:50.713901   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0612 21:38:50.713877   80157 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.401254109s)
	I0612 21:38:50.713921   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:50.713966   80157 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0612 21:38:50.713987   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:50.714017   80157 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:50.714063   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.987863   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:48.487299   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:48.986886   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:49.486972   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:49.987859   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:50.487034   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:50.987724   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.486948   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.986873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:52.487668   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.665638   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:53.665855   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:51.512765   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:54.011870   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:53.169682   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.455668553s)
	I0612 21:38:53.169705   80157 ssh_runner.go:235] Completed: which crictl: (2.455619981s)
	I0612 21:38:53.169714   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0612 21:38:53.169741   80157 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:53.169759   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:53.169784   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:53.216895   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0612 21:38:53.217020   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:38:57.220343   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.050521066s)
	I0612 21:38:57.220376   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0612 21:38:57.220397   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:57.220444   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:57.220443   80157 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (4.003396955s)
	I0612 21:38:57.220487   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0612 21:38:52.987635   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:53.487500   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:53.987860   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:54.487855   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:54.986868   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:55.487259   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:55.987902   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.487535   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.987269   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:57.487542   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.166299   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.665085   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:56.012847   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.557142   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.682288   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.46182102s)
	I0612 21:38:58.682313   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0612 21:38:58.682337   80157 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:38:58.682376   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:39:00.576373   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.893964365s)
	I0612 21:39:00.576412   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0612 21:39:00.576443   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:39:00.576504   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:38:57.987222   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:58.486976   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:58.986913   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:59.487269   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:59.987289   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.487208   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.987690   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:01.487283   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:01.987541   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:02.487589   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.667732   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:03.165317   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:01.012684   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:03.015111   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:02.445930   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.86940281s)
	I0612 21:39:02.445960   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0612 21:39:02.445994   80157 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:39:02.446071   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:39:03.393330   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0612 21:39:03.393375   80157 cache_images.go:123] Successfully loaded all cached images
	I0612 21:39:03.393382   80157 cache_images.go:92] duration metric: took 15.9711807s to LoadCachedImages
	I0612 21:39:03.393397   80157 kubeadm.go:928] updating node { 192.168.72.63 8443 v1.30.1 crio true true} ...
	I0612 21:39:03.393543   80157 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-087875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:39:03.393658   80157 ssh_runner.go:195] Run: crio config
	I0612 21:39:03.448859   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:39:03.448884   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:39:03.448901   80157 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:39:03.448930   80157 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.63 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-087875 NodeName:no-preload-087875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:39:03.449103   80157 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-087875"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:39:03.449181   80157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:39:03.462756   80157 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:39:03.462825   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:39:03.472653   80157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0612 21:39:03.491567   80157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:39:03.509239   80157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0612 21:39:03.527802   80157 ssh_runner.go:195] Run: grep 192.168.72.63	control-plane.minikube.internal$ /etc/hosts
	I0612 21:39:03.531523   80157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:39:03.543748   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:39:03.666376   80157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:39:03.683563   80157 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875 for IP: 192.168.72.63
	I0612 21:39:03.683587   80157 certs.go:194] generating shared ca certs ...
	I0612 21:39:03.683606   80157 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:39:03.683766   80157 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:39:03.683816   80157 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:39:03.683831   80157 certs.go:256] generating profile certs ...
	I0612 21:39:03.683927   80157 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/client.key
	I0612 21:39:03.684010   80157 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.key.13709275
	I0612 21:39:03.684066   80157 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.key
	I0612 21:39:03.684217   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:39:03.684259   80157 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:39:03.684272   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:39:03.684318   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:39:03.684364   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:39:03.684395   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:39:03.684455   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:39:03.685098   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:39:03.732817   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:39:03.771449   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:39:03.800774   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:39:03.831845   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 21:39:03.862000   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 21:39:03.901036   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:39:03.925025   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:39:03.950862   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:39:03.974222   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:39:04.002698   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:39:04.028173   80157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:39:04.044685   80157 ssh_runner.go:195] Run: openssl version
	I0612 21:39:04.050600   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:39:04.061893   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.066371   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.066424   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.072463   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:39:04.083929   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:39:04.094777   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.099380   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.099435   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.105125   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:39:04.116191   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:39:04.127408   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.132234   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.132315   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.138401   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:39:04.149542   80157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:39:04.154133   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:39:04.160171   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:39:04.166410   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:39:04.172650   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:39:04.178506   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:39:04.184375   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 21:39:04.190412   80157 kubeadm.go:391] StartCluster: {Name:no-preload-087875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:39:04.190524   80157 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:39:04.190584   80157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:39:04.235297   80157 cri.go:89] found id: ""
	I0612 21:39:04.235362   80157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:39:04.246400   80157 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:39:04.246429   80157 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:39:04.246449   80157 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:39:04.246499   80157 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:39:04.257137   80157 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:39:04.258277   80157 kubeconfig.go:125] found "no-preload-087875" server: "https://192.168.72.63:8443"
	I0612 21:39:04.260656   80157 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:39:04.270637   80157 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.63
	I0612 21:39:04.270666   80157 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:39:04.270675   80157 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:39:04.270730   80157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:39:04.316487   80157 cri.go:89] found id: ""
	I0612 21:39:04.316550   80157 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:39:04.334814   80157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:39:04.346430   80157 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:39:04.346451   80157 kubeadm.go:156] found existing configuration files:
	
	I0612 21:39:04.346500   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:39:04.356362   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:39:04.356417   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:39:04.366999   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:39:04.378005   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:39:04.378061   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:39:04.388052   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:39:04.397130   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:39:04.397185   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:39:04.407053   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:39:04.416338   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:39:04.416395   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:39:04.426475   80157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:39:04.436852   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:04.565452   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.461610   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.676493   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.767236   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.870855   80157 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:39:05.870960   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.372034   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.871680   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.906242   80157 api_server.go:72] duration metric: took 1.035387498s to wait for apiserver process to appear ...
	I0612 21:39:06.906273   80157 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:39:06.906296   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:06.906883   80157 api_server.go:269] stopped: https://192.168.72.63:8443/healthz: Get "https://192.168.72.63:8443/healthz": dial tcp 192.168.72.63:8443: connect: connection refused
	I0612 21:39:02.987853   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:03.487382   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:03.987303   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:04.487852   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:04.987464   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.486928   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.987660   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.487208   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.987822   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:07.487497   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.166502   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:07.665452   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:09.665766   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:05.512792   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:08.012392   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:10.014073   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:07.407227   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.589285   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:39:09.589319   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:39:09.589336   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.726716   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:09.726753   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:09.907032   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.917718   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:09.917746   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:10.406997   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:10.412127   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:10.412156   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:10.906700   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:10.911262   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 200:
	ok
	I0612 21:39:10.918778   80157 api_server.go:141] control plane version: v1.30.1
	I0612 21:39:10.918813   80157 api_server.go:131] duration metric: took 4.012531107s to wait for apiserver health ...
	I0612 21:39:10.918824   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:39:10.918832   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:39:10.921012   80157 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:39:10.922401   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:39:10.948209   80157 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:39:10.974530   80157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:39:10.986054   80157 system_pods.go:59] 8 kube-system pods found
	I0612 21:39:10.986091   80157 system_pods.go:61] "coredns-7db6d8ff4d-sh68b" [17691219-bfda-443b-8049-e6e966aadb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:39:10.986102   80157 system_pods.go:61] "etcd-no-preload-087875" [3048b12a-4354-45fd-99c7-d2a84035e102] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:39:10.986114   80157 system_pods.go:61] "kube-apiserver-no-preload-087875" [0f39a5fd-1a64-479f-bb28-c19bc10b7ed3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:39:10.986127   80157 system_pods.go:61] "kube-controller-manager-no-preload-087875" [62cc49b8-b05f-4371-aa17-bea17d08d2f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:39:10.986141   80157 system_pods.go:61] "kube-proxy-htv9h" [e3eb4693-7896-4dd2-98b8-91f06b028a1e] Running
	I0612 21:39:10.986158   80157 system_pods.go:61] "kube-scheduler-no-preload-087875" [ef833b9d-75ca-43bd-b196-30594775b174] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:39:10.986170   80157 system_pods.go:61] "metrics-server-569cc877fc-d5mj6" [79ba2aad-c942-4162-b69a-5c7dd138a618] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:39:10.986178   80157 system_pods.go:61] "storage-provisioner" [5793c778-1a5c-4cfe-924a-b85b72df53cd] Running
	I0612 21:39:10.986187   80157 system_pods.go:74] duration metric: took 11.634011ms to wait for pod list to return data ...
	I0612 21:39:10.986199   80157 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:39:10.992801   80157 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:39:10.992843   80157 node_conditions.go:123] node cpu capacity is 2
	I0612 21:39:10.992856   80157 node_conditions.go:105] duration metric: took 6.648025ms to run NodePressure ...
	I0612 21:39:10.992878   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:11.263413   80157 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:39:11.271758   80157 kubeadm.go:733] kubelet initialised
	I0612 21:39:11.271781   80157 kubeadm.go:734] duration metric: took 8.347232ms waiting for restarted kubelet to initialise ...
	I0612 21:39:11.271789   80157 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:39:11.277940   80157 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:07.987732   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:08.486974   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:08.986873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:09.486941   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:09.986929   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:10.487754   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:10.987685   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:11.486910   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:11.987457   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:12.486873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:12.165604   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:14.166986   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:12.029928   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:14.512085   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:13.287555   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:15.786345   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:12.987394   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:13.486915   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:13.987880   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:14.486881   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:14.986951   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:15.487462   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:15.986850   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.487213   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.987066   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:17.487882   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.666123   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:18.666354   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:16.512936   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:19.013463   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:18.285110   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:20.788396   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:21.284869   80157 pod_ready.go:92] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:21.284902   80157 pod_ready.go:81] duration metric: took 10.006929439s for pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:21.284916   80157 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:17.987273   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:18.486996   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:18.987836   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:19.487622   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:19.987381   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:20.487005   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:20.987638   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.487670   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.987552   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:22.487438   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.166215   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:23.665272   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:21.512836   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:24.014108   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:23.291502   80157 pod_ready.go:102] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:25.791813   80157 pod_ready.go:92] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.791842   80157 pod_ready.go:81] duration metric: took 4.506916362s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.791854   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.796901   80157 pod_ready.go:92] pod "kube-apiserver-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.796928   80157 pod_ready.go:81] duration metric: took 5.066599ms for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.796939   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.801550   80157 pod_ready.go:92] pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.801571   80157 pod_ready.go:81] duration metric: took 4.624771ms for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.801580   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-htv9h" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.806178   80157 pod_ready.go:92] pod "kube-proxy-htv9h" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.806195   80157 pod_ready.go:81] duration metric: took 4.609956ms for pod "kube-proxy-htv9h" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.806204   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.809883   80157 pod_ready.go:92] pod "kube-scheduler-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.809902   80157 pod_ready.go:81] duration metric: took 3.691999ms for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.809914   80157 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:22.987165   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:23.487122   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:23.987804   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:24.487583   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:24.987647   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.487126   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.987251   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:26.486996   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:26.987044   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:27.486911   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.668272   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:28.164809   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:26.513220   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:29.013047   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:27.817352   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:30.315600   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:27.987822   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:28.487496   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:28.987166   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:29.487892   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:29.987787   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.487315   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.987933   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:31.487255   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:31.987793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:32.487881   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.165900   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.167795   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:34.665939   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:31.013473   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:33.015281   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.316680   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:34.317063   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:36.816905   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.987267   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:33.487678   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:33.987296   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:33.987371   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:34.028670   80762 cri.go:89] found id: ""
	I0612 21:39:34.028699   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.028710   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:34.028717   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:34.028778   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:34.068371   80762 cri.go:89] found id: ""
	I0612 21:39:34.068400   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.068412   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:34.068419   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:34.068485   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:34.104605   80762 cri.go:89] found id: ""
	I0612 21:39:34.104634   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.104643   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:34.104650   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:34.104745   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:34.150301   80762 cri.go:89] found id: ""
	I0612 21:39:34.150327   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.150335   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:34.150341   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:34.150396   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:34.191426   80762 cri.go:89] found id: ""
	I0612 21:39:34.191462   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.191475   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:34.191484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:34.191562   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:34.228483   80762 cri.go:89] found id: ""
	I0612 21:39:34.228523   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.228535   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:34.228543   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:34.228653   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:34.262834   80762 cri.go:89] found id: ""
	I0612 21:39:34.262863   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.262873   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:34.262881   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:34.262944   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:34.298283   80762 cri.go:89] found id: ""
	I0612 21:39:34.298312   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.298321   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:34.298330   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:34.298340   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:34.350889   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:34.350918   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:34.365264   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:34.365289   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:34.508130   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:34.508162   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:34.508180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:34.572036   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:34.572076   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:37.114371   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:37.127410   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:37.127492   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:37.168684   80762 cri.go:89] found id: ""
	I0612 21:39:37.168705   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.168714   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:37.168723   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:37.168798   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:37.208765   80762 cri.go:89] found id: ""
	I0612 21:39:37.208797   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.208808   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:37.208815   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:37.208875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:37.266245   80762 cri.go:89] found id: ""
	I0612 21:39:37.266270   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.266277   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:37.266283   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:37.266331   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:37.313557   80762 cri.go:89] found id: ""
	I0612 21:39:37.313586   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.313597   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:37.313606   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:37.313677   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:37.353292   80762 cri.go:89] found id: ""
	I0612 21:39:37.353318   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.353325   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:37.353332   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:37.353389   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:37.391940   80762 cri.go:89] found id: ""
	I0612 21:39:37.391974   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.391984   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:37.392015   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:37.392078   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:37.432133   80762 cri.go:89] found id: ""
	I0612 21:39:37.432154   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.432166   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:37.432174   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:37.432228   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:37.468274   80762 cri.go:89] found id: ""
	I0612 21:39:37.468302   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.468310   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:37.468328   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:37.468347   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:37.543904   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:37.543941   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:37.586957   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:37.586982   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:37.641247   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:37.641288   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:37.657076   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:37.657101   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:37.729279   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:37.165427   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:39.166383   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:35.512174   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:37.513222   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:40.012806   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:39.317119   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:41.817268   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:40.229638   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:40.243825   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:40.243889   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:40.282795   80762 cri.go:89] found id: ""
	I0612 21:39:40.282821   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.282829   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:40.282834   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:40.282879   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:40.320211   80762 cri.go:89] found id: ""
	I0612 21:39:40.320236   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.320246   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:40.320252   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:40.320338   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:40.356270   80762 cri.go:89] found id: ""
	I0612 21:39:40.356292   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.356300   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:40.356306   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:40.356353   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:40.394667   80762 cri.go:89] found id: ""
	I0612 21:39:40.394691   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.394699   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:40.394704   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:40.394751   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:40.432765   80762 cri.go:89] found id: ""
	I0612 21:39:40.432794   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.432804   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:40.432811   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:40.432883   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:40.472347   80762 cri.go:89] found id: ""
	I0612 21:39:40.472386   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.472406   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:40.472414   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:40.472477   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:40.508414   80762 cri.go:89] found id: ""
	I0612 21:39:40.508445   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.508456   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:40.508464   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:40.508521   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:40.546938   80762 cri.go:89] found id: ""
	I0612 21:39:40.546964   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.546972   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:40.546981   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:40.546993   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:40.621356   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:40.621380   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:40.621398   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:40.703830   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:40.703865   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:40.744915   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:40.744965   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:40.798883   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:40.798920   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:41.167469   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:43.667403   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:42.512351   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:44.512639   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:44.317053   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:46.317350   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:43.315905   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:43.330150   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:43.330221   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:43.377307   80762 cri.go:89] found id: ""
	I0612 21:39:43.377337   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.377347   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:43.377362   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:43.377426   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:43.412608   80762 cri.go:89] found id: ""
	I0612 21:39:43.412638   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.412648   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:43.412654   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:43.412718   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:43.446716   80762 cri.go:89] found id: ""
	I0612 21:39:43.446746   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.446755   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:43.446762   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:43.446823   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:43.484607   80762 cri.go:89] found id: ""
	I0612 21:39:43.484636   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.484647   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:43.484655   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:43.484700   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:43.522400   80762 cri.go:89] found id: ""
	I0612 21:39:43.522427   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.522438   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:43.522445   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:43.522529   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:43.559121   80762 cri.go:89] found id: ""
	I0612 21:39:43.559147   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.559163   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:43.559211   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:43.559292   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:43.595886   80762 cri.go:89] found id: ""
	I0612 21:39:43.595919   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.595937   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:43.595945   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:43.596011   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:43.638549   80762 cri.go:89] found id: ""
	I0612 21:39:43.638573   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.638583   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:43.638594   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:43.638609   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:43.705300   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:43.705338   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:43.723246   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:43.723281   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:43.807735   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:43.807760   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:43.807870   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:43.882971   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:43.883017   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:46.421476   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:46.434447   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:46.434532   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:46.470710   80762 cri.go:89] found id: ""
	I0612 21:39:46.470745   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.470758   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:46.470765   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:46.470828   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:46.504843   80762 cri.go:89] found id: ""
	I0612 21:39:46.504871   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.504878   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:46.504884   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:46.504941   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:46.542937   80762 cri.go:89] found id: ""
	I0612 21:39:46.542965   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.542973   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:46.542979   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:46.543035   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:46.581098   80762 cri.go:89] found id: ""
	I0612 21:39:46.581124   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.581133   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:46.581143   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:46.581189   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:46.617289   80762 cri.go:89] found id: ""
	I0612 21:39:46.617319   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.617329   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:46.617337   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:46.617402   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:46.651012   80762 cri.go:89] found id: ""
	I0612 21:39:46.651045   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.651057   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:46.651070   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:46.651141   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:46.688344   80762 cri.go:89] found id: ""
	I0612 21:39:46.688370   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.688379   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:46.688388   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:46.688451   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:46.724349   80762 cri.go:89] found id: ""
	I0612 21:39:46.724374   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.724382   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:46.724390   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:46.724404   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:46.797866   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:46.797894   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:46.797912   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:46.887520   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:46.887557   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:46.928143   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:46.928182   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:46.981416   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:46.981451   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:46.164845   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:48.166925   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:46.513519   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:49.016041   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:48.816335   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:50.816407   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:49.497028   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:49.510077   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:49.510147   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:49.544313   80762 cri.go:89] found id: ""
	I0612 21:39:49.544349   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.544359   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:49.544365   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:49.544416   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:49.580220   80762 cri.go:89] found id: ""
	I0612 21:39:49.580248   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.580256   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:49.580262   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:49.580316   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:49.619582   80762 cri.go:89] found id: ""
	I0612 21:39:49.619607   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.619615   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:49.619620   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:49.619692   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:49.656453   80762 cri.go:89] found id: ""
	I0612 21:39:49.656479   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.656487   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:49.656493   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:49.656557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:49.694285   80762 cri.go:89] found id: ""
	I0612 21:39:49.694318   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.694330   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:49.694338   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:49.694417   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:49.731100   80762 cri.go:89] found id: ""
	I0612 21:39:49.731127   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.731135   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:49.731140   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:49.731209   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:49.767709   80762 cri.go:89] found id: ""
	I0612 21:39:49.767731   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.767738   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:49.767744   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:49.767787   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:49.801231   80762 cri.go:89] found id: ""
	I0612 21:39:49.801265   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.801283   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:49.801294   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:49.801309   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:49.848500   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:49.848542   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:49.900084   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:49.900121   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:49.916208   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:49.916234   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:49.983283   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:49.983310   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:49.983325   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:52.566884   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:52.580400   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:52.580476   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:52.615922   80762 cri.go:89] found id: ""
	I0612 21:39:52.615957   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.615970   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:52.615978   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:52.616038   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:52.657316   80762 cri.go:89] found id: ""
	I0612 21:39:52.657348   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.657356   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:52.657362   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:52.657417   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:52.692426   80762 cri.go:89] found id: ""
	I0612 21:39:52.692459   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.692470   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:52.692478   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:52.692542   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:52.726800   80762 cri.go:89] found id: ""
	I0612 21:39:52.726835   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.726848   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:52.726856   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:52.726921   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:52.764283   80762 cri.go:89] found id: ""
	I0612 21:39:52.764314   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.764326   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:52.764341   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:52.764395   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:52.802279   80762 cri.go:89] found id: ""
	I0612 21:39:52.802311   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.802324   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:52.802331   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:52.802385   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:52.841433   80762 cri.go:89] found id: ""
	I0612 21:39:52.841466   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.841477   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:52.841484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:52.841546   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:50.667322   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:53.165294   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:51.016137   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:53.019373   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:52.818876   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:55.316845   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:52.881417   80762 cri.go:89] found id: ""
	I0612 21:39:52.881441   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.881449   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:52.881457   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:52.881468   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:52.936228   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:52.936262   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:52.950688   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:52.950718   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:53.025101   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:53.025122   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:53.025138   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:53.114986   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:53.115031   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:55.653893   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:55.668983   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:55.669047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:55.708445   80762 cri.go:89] found id: ""
	I0612 21:39:55.708475   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.708486   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:55.708494   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:55.708558   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:55.745158   80762 cri.go:89] found id: ""
	I0612 21:39:55.745185   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.745195   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:55.745204   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:55.745270   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:55.785322   80762 cri.go:89] found id: ""
	I0612 21:39:55.785344   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.785363   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:55.785370   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:55.785442   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:55.822371   80762 cri.go:89] found id: ""
	I0612 21:39:55.822397   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.822408   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:55.822416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:55.822484   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:55.856866   80762 cri.go:89] found id: ""
	I0612 21:39:55.856888   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.856895   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:55.856900   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:55.856954   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:55.891618   80762 cri.go:89] found id: ""
	I0612 21:39:55.891648   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.891660   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:55.891668   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:55.891731   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:55.927483   80762 cri.go:89] found id: ""
	I0612 21:39:55.927504   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.927513   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:55.927519   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:55.927572   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:55.963546   80762 cri.go:89] found id: ""
	I0612 21:39:55.963572   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.963584   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:55.963597   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:55.963616   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:56.037421   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:56.037442   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:56.037453   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:56.112148   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:56.112185   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:56.163359   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:56.163389   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:56.217109   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:56.217144   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:55.166499   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:57.665517   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:59.665625   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:55.513267   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:58.015558   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:57.317149   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:59.320306   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:01.815855   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:58.733278   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:58.746890   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:58.746951   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:58.785222   80762 cri.go:89] found id: ""
	I0612 21:39:58.785252   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.785263   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:58.785269   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:58.785343   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:58.824421   80762 cri.go:89] found id: ""
	I0612 21:39:58.824448   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.824455   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:58.824461   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:58.824521   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:58.863626   80762 cri.go:89] found id: ""
	I0612 21:39:58.863658   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.863669   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:58.863728   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:58.863818   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:58.904040   80762 cri.go:89] found id: ""
	I0612 21:39:58.904064   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.904073   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:58.904080   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:58.904147   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:58.937508   80762 cri.go:89] found id: ""
	I0612 21:39:58.937543   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.937557   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:58.937565   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:58.937632   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:58.974283   80762 cri.go:89] found id: ""
	I0612 21:39:58.974311   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.974322   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:58.974330   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:58.974383   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:59.009954   80762 cri.go:89] found id: ""
	I0612 21:39:59.009987   80762 logs.go:276] 0 containers: []
	W0612 21:39:59.009999   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:59.010007   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:59.010072   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:59.051911   80762 cri.go:89] found id: ""
	I0612 21:39:59.051935   80762 logs.go:276] 0 containers: []
	W0612 21:39:59.051943   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:59.051951   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:59.051961   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:59.102911   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:59.102942   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:59.116576   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:59.116608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:59.189590   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:59.189619   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:59.189634   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:59.270192   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:59.270232   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:01.820872   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:01.834916   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:01.835000   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:01.870526   80762 cri.go:89] found id: ""
	I0612 21:40:01.870560   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.870572   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:01.870579   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:01.870642   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:01.909581   80762 cri.go:89] found id: ""
	I0612 21:40:01.909614   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.909626   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:01.909633   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:01.909727   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:01.947944   80762 cri.go:89] found id: ""
	I0612 21:40:01.947976   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.947988   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:01.947995   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:01.948059   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:01.985745   80762 cri.go:89] found id: ""
	I0612 21:40:01.985781   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.985793   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:01.985800   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:01.985860   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:02.023716   80762 cri.go:89] found id: ""
	I0612 21:40:02.023741   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.023749   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:02.023754   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:02.023801   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:02.059136   80762 cri.go:89] found id: ""
	I0612 21:40:02.059168   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.059203   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:02.059212   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:02.059283   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:02.104520   80762 cri.go:89] found id: ""
	I0612 21:40:02.104544   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.104552   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:02.104558   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:02.104618   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:02.146130   80762 cri.go:89] found id: ""
	I0612 21:40:02.146164   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.146176   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:02.146187   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:02.146202   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:02.199672   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:02.199710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:02.215224   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:02.215256   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:02.290030   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:02.290057   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:02.290072   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:02.374579   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:02.374615   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:01.667390   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:04.165253   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:00.512229   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:02.513298   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:05.018848   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:03.816610   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:05.818990   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:04.915345   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:04.928323   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:04.928404   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:04.963267   80762 cri.go:89] found id: ""
	I0612 21:40:04.963297   80762 logs.go:276] 0 containers: []
	W0612 21:40:04.963310   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:04.963319   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:04.963386   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:04.998378   80762 cri.go:89] found id: ""
	I0612 21:40:04.998409   80762 logs.go:276] 0 containers: []
	W0612 21:40:04.998420   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:04.998426   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:04.998498   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:05.038094   80762 cri.go:89] found id: ""
	I0612 21:40:05.038118   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.038126   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:05.038132   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:05.038181   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:05.074331   80762 cri.go:89] found id: ""
	I0612 21:40:05.074366   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.074379   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:05.074386   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:05.074462   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:05.109332   80762 cri.go:89] found id: ""
	I0612 21:40:05.109359   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.109368   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:05.109373   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:05.109423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:05.143875   80762 cri.go:89] found id: ""
	I0612 21:40:05.143908   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.143918   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:05.143926   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:05.143990   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:05.183695   80762 cri.go:89] found id: ""
	I0612 21:40:05.183724   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.183731   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:05.183737   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:05.183792   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:05.222852   80762 cri.go:89] found id: ""
	I0612 21:40:05.222878   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.222887   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:05.222895   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:05.222907   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:05.262661   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:05.262687   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:05.315563   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:05.315593   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:05.332128   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:05.332163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:05.411675   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:05.411699   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:05.411712   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:06.665324   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:08.667163   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:07.512587   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:10.012843   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:08.316990   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:10.816093   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:07.991930   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:08.005743   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:08.005807   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:08.041685   80762 cri.go:89] found id: ""
	I0612 21:40:08.041714   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.041724   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:08.041732   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:08.041791   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:08.080875   80762 cri.go:89] found id: ""
	I0612 21:40:08.080905   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.080916   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:08.080925   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:08.080993   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:08.117290   80762 cri.go:89] found id: ""
	I0612 21:40:08.117316   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.117323   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:08.117329   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:08.117387   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:08.154345   80762 cri.go:89] found id: ""
	I0612 21:40:08.154376   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.154387   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:08.154395   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:08.154459   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:08.192913   80762 cri.go:89] found id: ""
	I0612 21:40:08.192947   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.192957   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:08.192969   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:08.193033   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:08.235732   80762 cri.go:89] found id: ""
	I0612 21:40:08.235764   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.235775   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:08.235782   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:08.235853   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:08.274282   80762 cri.go:89] found id: ""
	I0612 21:40:08.274306   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.274314   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:08.274320   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:08.274366   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:08.314585   80762 cri.go:89] found id: ""
	I0612 21:40:08.314608   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.314619   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:08.314628   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:08.314641   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:08.331693   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:08.331725   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:08.414541   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:08.414565   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:08.414584   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:08.496428   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:08.496460   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:08.546991   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:08.547020   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:11.099778   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:11.113450   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:11.113539   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:11.150426   80762 cri.go:89] found id: ""
	I0612 21:40:11.150451   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.150459   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:11.150464   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:11.150524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:11.189931   80762 cri.go:89] found id: ""
	I0612 21:40:11.189958   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.189967   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:11.189972   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:11.190031   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:11.228116   80762 cri.go:89] found id: ""
	I0612 21:40:11.228144   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.228154   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:11.228161   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:11.228243   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:11.268639   80762 cri.go:89] found id: ""
	I0612 21:40:11.268664   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.268672   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:11.268678   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:11.268723   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:11.306077   80762 cri.go:89] found id: ""
	I0612 21:40:11.306105   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.306116   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:11.306123   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:11.306187   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:11.344360   80762 cri.go:89] found id: ""
	I0612 21:40:11.344388   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.344399   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:11.344418   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:11.344475   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:11.382906   80762 cri.go:89] found id: ""
	I0612 21:40:11.382937   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.382948   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:11.382957   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:11.383027   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:11.418388   80762 cri.go:89] found id: ""
	I0612 21:40:11.418419   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.418429   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:11.418439   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:11.418453   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:11.432204   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:11.432241   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:11.508219   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:11.508251   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:11.508263   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:11.593021   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:11.593058   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:11.634056   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:11.634087   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:11.165384   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:13.170153   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:12.013303   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:14.013454   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:12.817129   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:15.316929   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:14.187831   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:14.203153   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:14.203248   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:14.239693   80762 cri.go:89] found id: ""
	I0612 21:40:14.239716   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.239723   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:14.239729   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:14.239827   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:14.273206   80762 cri.go:89] found id: ""
	I0612 21:40:14.273234   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.273244   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:14.273251   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:14.273313   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:14.315512   80762 cri.go:89] found id: ""
	I0612 21:40:14.315592   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.315610   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:14.315618   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:14.315679   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:14.352454   80762 cri.go:89] found id: ""
	I0612 21:40:14.352483   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.352496   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:14.352504   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:14.352554   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:14.387845   80762 cri.go:89] found id: ""
	I0612 21:40:14.387872   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.387880   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:14.387886   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:14.387935   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:14.423220   80762 cri.go:89] found id: ""
	I0612 21:40:14.423245   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.423254   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:14.423259   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:14.423322   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:14.457744   80762 cri.go:89] found id: ""
	I0612 21:40:14.457772   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.457784   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:14.457791   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:14.457849   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:14.493580   80762 cri.go:89] found id: ""
	I0612 21:40:14.493611   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.493622   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:14.493633   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:14.493669   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:14.566867   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:14.566894   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:14.566913   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:14.645916   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:14.645959   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:14.690232   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:14.690262   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:14.741532   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:14.741576   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:17.257886   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:17.271841   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:17.271910   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:17.309628   80762 cri.go:89] found id: ""
	I0612 21:40:17.309654   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.309667   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:17.309675   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:17.309746   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:17.346671   80762 cri.go:89] found id: ""
	I0612 21:40:17.346752   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.346769   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:17.346777   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:17.346842   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:17.381145   80762 cri.go:89] found id: ""
	I0612 21:40:17.381169   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.381177   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:17.381184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:17.381241   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:17.417159   80762 cri.go:89] found id: ""
	I0612 21:40:17.417179   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.417187   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:17.417194   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:17.417254   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:17.453189   80762 cri.go:89] found id: ""
	I0612 21:40:17.453213   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.453220   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:17.453226   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:17.453284   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:17.510988   80762 cri.go:89] found id: ""
	I0612 21:40:17.511012   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.511019   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:17.511026   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:17.511083   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:17.548141   80762 cri.go:89] found id: ""
	I0612 21:40:17.548166   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.548176   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:17.548182   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:17.548243   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:17.584591   80762 cri.go:89] found id: ""
	I0612 21:40:17.584619   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.584627   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:17.584637   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:17.584647   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:17.628627   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:17.628662   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:17.682792   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:17.682823   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:17.697921   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:17.697959   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:17.770591   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:17.770617   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:17.770633   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:15.665831   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:18.165059   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:16.014130   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:18.513491   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:17.817443   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.316576   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.350181   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:20.363671   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:20.363743   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:20.399858   80762 cri.go:89] found id: ""
	I0612 21:40:20.399889   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.399896   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:20.399903   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:20.399963   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:20.437715   80762 cri.go:89] found id: ""
	I0612 21:40:20.437755   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.437766   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:20.437776   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:20.437843   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:20.472525   80762 cri.go:89] found id: ""
	I0612 21:40:20.472558   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.472573   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:20.472582   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:20.472642   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:20.507923   80762 cri.go:89] found id: ""
	I0612 21:40:20.507948   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.507959   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:20.507966   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:20.508029   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:20.545471   80762 cri.go:89] found id: ""
	I0612 21:40:20.545502   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.545512   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:20.545519   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:20.545586   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:20.583793   80762 cri.go:89] found id: ""
	I0612 21:40:20.583829   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.583839   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:20.583846   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:20.583912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:20.624399   80762 cri.go:89] found id: ""
	I0612 21:40:20.624438   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.624449   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:20.624467   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:20.624530   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:20.665158   80762 cri.go:89] found id: ""
	I0612 21:40:20.665184   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.665194   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:20.665203   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:20.665217   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:20.743062   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:20.743101   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:20.792573   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:20.792613   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:20.847998   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:20.848033   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:20.863447   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:20.863497   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:20.938020   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:20.165455   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:22.665110   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:24.665262   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.513556   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:23.014750   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:22.316950   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:24.815377   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:26.817066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:23.438289   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:23.453792   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:23.453855   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:23.494044   80762 cri.go:89] found id: ""
	I0612 21:40:23.494070   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.494077   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:23.494083   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:23.494144   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:23.533278   80762 cri.go:89] found id: ""
	I0612 21:40:23.533305   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.533313   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:23.533319   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:23.533380   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:23.568504   80762 cri.go:89] found id: ""
	I0612 21:40:23.568538   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.568549   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:23.568556   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:23.568619   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:23.610596   80762 cri.go:89] found id: ""
	I0612 21:40:23.610624   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.610633   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:23.610638   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:23.610690   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:23.651856   80762 cri.go:89] found id: ""
	I0612 21:40:23.651886   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.651896   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:23.651903   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:23.651978   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:23.690989   80762 cri.go:89] found id: ""
	I0612 21:40:23.691020   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.691030   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:23.691036   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:23.691089   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:23.730417   80762 cri.go:89] found id: ""
	I0612 21:40:23.730454   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.730467   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:23.730476   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:23.730538   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:23.773887   80762 cri.go:89] found id: ""
	I0612 21:40:23.773913   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.773921   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:23.773932   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:23.773947   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:23.825771   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:23.825805   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:23.840136   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:23.840163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:23.933645   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:23.933670   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:23.933686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:24.020205   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:24.020243   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:26.566746   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:26.579557   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:26.579612   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:26.614721   80762 cri.go:89] found id: ""
	I0612 21:40:26.614749   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.614757   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:26.614763   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:26.614815   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:26.651398   80762 cri.go:89] found id: ""
	I0612 21:40:26.651427   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.651437   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:26.651445   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:26.651506   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:26.688217   80762 cri.go:89] found id: ""
	I0612 21:40:26.688249   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.688261   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:26.688268   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:26.688333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:26.721316   80762 cri.go:89] found id: ""
	I0612 21:40:26.721346   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.721357   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:26.721364   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:26.721424   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:26.758842   80762 cri.go:89] found id: ""
	I0612 21:40:26.758868   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.758878   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:26.758885   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:26.758957   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:26.795696   80762 cri.go:89] found id: ""
	I0612 21:40:26.795725   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.795733   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:26.795738   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:26.795788   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:26.834903   80762 cri.go:89] found id: ""
	I0612 21:40:26.834932   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.834941   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:26.834947   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:26.835020   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:26.872751   80762 cri.go:89] found id: ""
	I0612 21:40:26.872788   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.872796   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:26.872805   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:26.872817   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:26.952401   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:26.952440   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:26.990548   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:26.990583   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:27.042973   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:27.043029   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:27.058348   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:27.058379   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:27.133047   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:26.666430   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.165063   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:25.513982   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:28.012556   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:30.017664   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.315668   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:31.316817   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.634105   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:29.654113   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:29.654171   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:29.700138   80762 cri.go:89] found id: ""
	I0612 21:40:29.700169   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.700179   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:29.700188   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:29.700260   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:29.751599   80762 cri.go:89] found id: ""
	I0612 21:40:29.751628   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.751638   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:29.751646   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:29.751699   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:29.801971   80762 cri.go:89] found id: ""
	I0612 21:40:29.801995   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.802003   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:29.802008   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:29.802059   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:29.839381   80762 cri.go:89] found id: ""
	I0612 21:40:29.839407   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.839418   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:29.839426   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:29.839484   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:29.876634   80762 cri.go:89] found id: ""
	I0612 21:40:29.876661   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.876668   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:29.876675   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:29.876721   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:29.909673   80762 cri.go:89] found id: ""
	I0612 21:40:29.909707   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.909718   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:29.909726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:29.909791   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:29.947984   80762 cri.go:89] found id: ""
	I0612 21:40:29.948019   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.948029   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:29.948037   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:29.948099   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:29.988611   80762 cri.go:89] found id: ""
	I0612 21:40:29.988639   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.988650   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:29.988660   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:29.988675   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:30.073180   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:30.073216   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:30.114703   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:30.114732   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:30.173242   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:30.173278   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:30.189081   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:30.189112   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:30.263564   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:32.763967   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:32.776738   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:32.776808   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:32.813088   80762 cri.go:89] found id: ""
	I0612 21:40:32.813115   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.813125   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:32.813132   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:32.813195   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:32.850960   80762 cri.go:89] found id: ""
	I0612 21:40:32.850987   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.850996   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:32.851004   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:32.851065   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:31.166578   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:33.669302   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:32.512480   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:34.512817   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:33.815867   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:35.817105   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:32.887229   80762 cri.go:89] found id: ""
	I0612 21:40:32.887259   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.887270   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:32.887277   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:32.887346   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:32.923123   80762 cri.go:89] found id: ""
	I0612 21:40:32.923148   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.923158   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:32.923164   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:32.923242   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:32.962603   80762 cri.go:89] found id: ""
	I0612 21:40:32.962628   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.962638   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:32.962644   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:32.962695   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:32.998971   80762 cri.go:89] found id: ""
	I0612 21:40:32.999025   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.999037   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:32.999046   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:32.999120   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:33.037640   80762 cri.go:89] found id: ""
	I0612 21:40:33.037670   80762 logs.go:276] 0 containers: []
	W0612 21:40:33.037680   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:33.037686   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:33.037748   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:33.073758   80762 cri.go:89] found id: ""
	I0612 21:40:33.073787   80762 logs.go:276] 0 containers: []
	W0612 21:40:33.073794   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:33.073804   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:33.073815   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:33.124478   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:33.124512   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:33.139010   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:33.139036   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:33.207693   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:33.207716   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:33.207732   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:33.287710   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:33.287746   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:35.831654   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:35.845783   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:35.845845   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:35.882097   80762 cri.go:89] found id: ""
	I0612 21:40:35.882129   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.882141   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:35.882149   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:35.882205   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:35.920931   80762 cri.go:89] found id: ""
	I0612 21:40:35.920972   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.920980   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:35.920985   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:35.921061   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:35.958689   80762 cri.go:89] found id: ""
	I0612 21:40:35.958712   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.958721   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:35.958726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:35.958774   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:35.994973   80762 cri.go:89] found id: ""
	I0612 21:40:35.995028   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.995040   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:35.995048   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:35.995114   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:36.035679   80762 cri.go:89] found id: ""
	I0612 21:40:36.035707   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.035715   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:36.035721   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:36.035768   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:36.071498   80762 cri.go:89] found id: ""
	I0612 21:40:36.071525   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.071534   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:36.071544   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:36.071594   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:36.107367   80762 cri.go:89] found id: ""
	I0612 21:40:36.107397   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.107406   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:36.107413   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:36.107466   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:36.148668   80762 cri.go:89] found id: ""
	I0612 21:40:36.148699   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.148710   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:36.148721   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:36.148736   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:36.207719   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:36.207765   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:36.223129   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:36.223158   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:36.290786   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:36.290809   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:36.290822   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:36.375361   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:36.375398   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:36.165430   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.165989   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:37.015936   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:39.513497   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.318886   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:40.815802   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.921100   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:38.935420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:38.935491   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:38.970519   80762 cri.go:89] found id: ""
	I0612 21:40:38.970548   80762 logs.go:276] 0 containers: []
	W0612 21:40:38.970559   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:38.970567   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:38.970639   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:39.005866   80762 cri.go:89] found id: ""
	I0612 21:40:39.005888   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.005896   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:39.005902   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:39.005954   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:39.043619   80762 cri.go:89] found id: ""
	I0612 21:40:39.043647   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.043655   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:39.043661   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:39.043709   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:39.081311   80762 cri.go:89] found id: ""
	I0612 21:40:39.081336   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.081344   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:39.081350   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:39.081410   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:39.117326   80762 cri.go:89] found id: ""
	I0612 21:40:39.117358   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.117367   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:39.117372   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:39.117423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:39.151785   80762 cri.go:89] found id: ""
	I0612 21:40:39.151819   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.151828   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:39.151835   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:39.151899   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:39.187031   80762 cri.go:89] found id: ""
	I0612 21:40:39.187057   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.187065   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:39.187071   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:39.187119   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:39.222186   80762 cri.go:89] found id: ""
	I0612 21:40:39.222212   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.222223   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:39.222233   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:39.222245   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:39.276126   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:39.276164   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:39.291631   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:39.291658   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:39.365615   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:39.365641   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:39.365659   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:39.442548   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:39.442600   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:41.980840   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:41.996629   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:41.996686   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:42.034158   80762 cri.go:89] found id: ""
	I0612 21:40:42.034186   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.034195   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:42.034202   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:42.034274   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:42.070981   80762 cri.go:89] found id: ""
	I0612 21:40:42.071011   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.071021   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:42.071028   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:42.071093   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:42.108282   80762 cri.go:89] found id: ""
	I0612 21:40:42.108309   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.108316   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:42.108322   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:42.108369   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:42.146394   80762 cri.go:89] found id: ""
	I0612 21:40:42.146423   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.146434   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:42.146454   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:42.146539   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:42.183577   80762 cri.go:89] found id: ""
	I0612 21:40:42.183601   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.183608   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:42.183614   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:42.183662   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:42.222069   80762 cri.go:89] found id: ""
	I0612 21:40:42.222100   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.222109   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:42.222115   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:42.222168   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:42.259128   80762 cri.go:89] found id: ""
	I0612 21:40:42.259155   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.259164   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:42.259192   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:42.259282   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:42.296321   80762 cri.go:89] found id: ""
	I0612 21:40:42.296354   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.296368   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:42.296380   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:42.296400   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:42.311098   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:42.311137   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:42.386116   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:42.386144   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:42.386163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:42.467016   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:42.467054   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:42.509143   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:42.509180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:40.166288   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.664817   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:44.665596   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.017043   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:44.513368   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.816702   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:45.316890   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:45.062872   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:45.076570   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:45.076658   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:45.114362   80762 cri.go:89] found id: ""
	I0612 21:40:45.114394   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.114404   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:45.114412   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:45.114478   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:45.151577   80762 cri.go:89] found id: ""
	I0612 21:40:45.151609   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.151620   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:45.151627   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:45.151689   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:45.188753   80762 cri.go:89] found id: ""
	I0612 21:40:45.188785   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.188795   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:45.188802   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:45.188861   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:45.224775   80762 cri.go:89] found id: ""
	I0612 21:40:45.224801   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.224808   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:45.224814   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:45.224873   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:45.260440   80762 cri.go:89] found id: ""
	I0612 21:40:45.260472   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.260483   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:45.260490   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:45.260547   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:45.297662   80762 cri.go:89] found id: ""
	I0612 21:40:45.297697   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.297709   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:45.297716   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:45.297774   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:45.335637   80762 cri.go:89] found id: ""
	I0612 21:40:45.335669   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.335682   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:45.335690   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:45.335753   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:45.371523   80762 cri.go:89] found id: ""
	I0612 21:40:45.371580   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.371590   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:45.371599   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:45.371610   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:45.424029   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:45.424065   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:45.440339   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:45.440378   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:45.509504   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:45.509526   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:45.509541   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:45.591857   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:45.591893   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:47.166437   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.665544   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:47.016561   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.511894   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:47.320090   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.816816   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:48.135912   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:48.151271   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:48.151331   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:48.192740   80762 cri.go:89] found id: ""
	I0612 21:40:48.192775   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.192788   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:48.192798   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:48.192875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:48.230440   80762 cri.go:89] found id: ""
	I0612 21:40:48.230469   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.230479   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:48.230487   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:48.230549   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:48.270892   80762 cri.go:89] found id: ""
	I0612 21:40:48.270922   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.270933   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:48.270941   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:48.270996   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:48.308555   80762 cri.go:89] found id: ""
	I0612 21:40:48.308580   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.308588   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:48.308594   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:48.308640   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:48.342705   80762 cri.go:89] found id: ""
	I0612 21:40:48.342727   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.342735   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:48.342741   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:48.342788   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:48.377418   80762 cri.go:89] found id: ""
	I0612 21:40:48.377450   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.377461   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:48.377468   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:48.377535   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:48.413092   80762 cri.go:89] found id: ""
	I0612 21:40:48.413126   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.413141   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:48.413149   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:48.413215   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:48.447673   80762 cri.go:89] found id: ""
	I0612 21:40:48.447699   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.447708   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:48.447716   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:48.447728   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:48.488508   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:48.488542   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:48.540573   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:48.540608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:48.554735   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:48.554762   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:48.632074   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:48.632098   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:48.632117   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:51.212336   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:51.227428   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:51.227493   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:51.268124   80762 cri.go:89] found id: ""
	I0612 21:40:51.268157   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.268167   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:51.268172   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:51.268220   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:51.305751   80762 cri.go:89] found id: ""
	I0612 21:40:51.305777   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.305785   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:51.305793   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:51.305849   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:51.347292   80762 cri.go:89] found id: ""
	I0612 21:40:51.347318   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.347325   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:51.347332   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:51.347394   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:51.387476   80762 cri.go:89] found id: ""
	I0612 21:40:51.387501   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.387509   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:51.387515   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:51.387573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:51.431992   80762 cri.go:89] found id: ""
	I0612 21:40:51.432019   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.432029   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:51.432036   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:51.432096   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:51.477204   80762 cri.go:89] found id: ""
	I0612 21:40:51.477235   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.477246   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:51.477254   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:51.477346   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:51.518449   80762 cri.go:89] found id: ""
	I0612 21:40:51.518477   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.518488   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:51.518502   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:51.518562   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:51.554991   80762 cri.go:89] found id: ""
	I0612 21:40:51.555015   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.555024   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:51.555033   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:51.555046   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:51.606732   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:51.606769   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:51.620512   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:51.620538   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:51.697029   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:51.697058   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:51.697074   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:51.775401   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:51.775437   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:51.666561   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.166247   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:51.512909   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.012887   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:52.315904   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.316764   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:56.816819   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.318059   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:54.331420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:54.331509   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:54.367886   80762 cri.go:89] found id: ""
	I0612 21:40:54.367926   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.367948   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:54.367959   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:54.368047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:54.403998   80762 cri.go:89] found id: ""
	I0612 21:40:54.404023   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.404034   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:54.404041   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:54.404108   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:54.441449   80762 cri.go:89] found id: ""
	I0612 21:40:54.441480   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.441491   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:54.441498   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:54.441557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:54.476459   80762 cri.go:89] found id: ""
	I0612 21:40:54.476490   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.476500   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:54.476508   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:54.476573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:54.515337   80762 cri.go:89] found id: ""
	I0612 21:40:54.515360   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.515368   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:54.515374   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:54.515423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:54.551447   80762 cri.go:89] found id: ""
	I0612 21:40:54.551468   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.551475   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:54.551481   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:54.551528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:54.587082   80762 cri.go:89] found id: ""
	I0612 21:40:54.587114   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.587125   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:54.587145   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:54.587225   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:54.624211   80762 cri.go:89] found id: ""
	I0612 21:40:54.624235   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.624257   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:54.624268   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:54.624282   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:54.677816   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:54.677848   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:54.693725   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:54.693749   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:54.772229   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:54.772255   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:54.772273   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:54.852543   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:54.852578   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:57.397722   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:57.411082   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:57.411145   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:57.449633   80762 cri.go:89] found id: ""
	I0612 21:40:57.449662   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.449673   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:57.449680   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:57.449745   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:57.489855   80762 cri.go:89] found id: ""
	I0612 21:40:57.489880   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.489889   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:57.489894   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:57.489952   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:57.528986   80762 cri.go:89] found id: ""
	I0612 21:40:57.529006   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.529014   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:57.529019   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:57.529081   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:57.566701   80762 cri.go:89] found id: ""
	I0612 21:40:57.566730   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.566739   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:57.566746   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:57.566800   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:57.601114   80762 cri.go:89] found id: ""
	I0612 21:40:57.601137   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.601145   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:57.601151   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:57.601212   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:57.636120   80762 cri.go:89] found id: ""
	I0612 21:40:57.636145   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.636155   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:57.636163   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:57.636225   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:57.676912   80762 cri.go:89] found id: ""
	I0612 21:40:57.676953   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.676960   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:57.676966   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:57.677039   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:57.714671   80762 cri.go:89] found id: ""
	I0612 21:40:57.714691   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.714699   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:57.714707   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:57.714720   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:57.770550   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:57.770583   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:57.785062   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:57.785093   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:57.853448   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:57.853468   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:57.853480   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:56.167768   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.665108   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:56.014274   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.014535   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.816961   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:00.817450   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:57.939957   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:57.939999   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:00.493469   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:00.509746   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:00.509819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:00.546582   80762 cri.go:89] found id: ""
	I0612 21:41:00.546610   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.546620   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:00.546629   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:00.546683   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:00.584229   80762 cri.go:89] found id: ""
	I0612 21:41:00.584256   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.584264   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:00.584269   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:00.584337   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:00.618679   80762 cri.go:89] found id: ""
	I0612 21:41:00.618704   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.618712   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:00.618719   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:00.618778   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:00.656336   80762 cri.go:89] found id: ""
	I0612 21:41:00.656364   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.656375   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:00.656384   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:00.656457   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:00.694147   80762 cri.go:89] found id: ""
	I0612 21:41:00.694173   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.694182   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:00.694187   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:00.694236   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:00.733964   80762 cri.go:89] found id: ""
	I0612 21:41:00.733994   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.734006   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:00.734014   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:00.734076   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:00.771245   80762 cri.go:89] found id: ""
	I0612 21:41:00.771274   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.771287   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:00.771293   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:00.771357   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:00.809118   80762 cri.go:89] found id: ""
	I0612 21:41:00.809150   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.809158   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:00.809168   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:00.809188   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:00.863479   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:00.863514   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:00.878749   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:00.878783   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:00.955800   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:00.955825   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:00.955844   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:01.037587   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:01.037618   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:00.666373   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.165203   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:00.513805   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.017922   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.317115   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:05.817907   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.583063   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:03.597656   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:03.597732   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:03.633233   80762 cri.go:89] found id: ""
	I0612 21:41:03.633263   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.633283   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:03.633291   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:03.633357   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:03.679900   80762 cri.go:89] found id: ""
	I0612 21:41:03.679930   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.679941   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:03.679948   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:03.680018   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:03.718766   80762 cri.go:89] found id: ""
	I0612 21:41:03.718792   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.718800   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:03.718811   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:03.718868   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:03.759404   80762 cri.go:89] found id: ""
	I0612 21:41:03.759429   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.759437   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:03.759443   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:03.759496   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:03.794313   80762 cri.go:89] found id: ""
	I0612 21:41:03.794348   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.794357   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:03.794364   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:03.794430   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:03.832525   80762 cri.go:89] found id: ""
	I0612 21:41:03.832546   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.832554   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:03.832559   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:03.832607   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:03.872841   80762 cri.go:89] found id: ""
	I0612 21:41:03.872868   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.872878   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:03.872885   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:03.872945   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:03.912736   80762 cri.go:89] found id: ""
	I0612 21:41:03.912760   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.912770   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:03.912781   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:03.912794   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:03.986645   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:03.986672   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:03.986688   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:04.066766   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:04.066799   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:04.108219   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:04.108250   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:04.168866   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:04.168911   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:06.684232   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:06.698359   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:06.698443   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:06.735324   80762 cri.go:89] found id: ""
	I0612 21:41:06.735350   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.735359   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:06.735364   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:06.735418   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:06.771763   80762 cri.go:89] found id: ""
	I0612 21:41:06.771786   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.771794   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:06.771799   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:06.771850   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:06.808151   80762 cri.go:89] found id: ""
	I0612 21:41:06.808179   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.808188   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:06.808193   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:06.808263   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:06.846099   80762 cri.go:89] found id: ""
	I0612 21:41:06.846121   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.846129   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:06.846134   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:06.846182   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:06.883559   80762 cri.go:89] found id: ""
	I0612 21:41:06.883584   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.883591   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:06.883597   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:06.883645   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:06.920799   80762 cri.go:89] found id: ""
	I0612 21:41:06.920830   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.920841   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:06.920849   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:06.920914   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:06.964441   80762 cri.go:89] found id: ""
	I0612 21:41:06.964472   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.964482   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:06.964490   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:06.964561   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:07.000866   80762 cri.go:89] found id: ""
	I0612 21:41:07.000901   80762 logs.go:276] 0 containers: []
	W0612 21:41:07.000912   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:07.000924   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:07.000993   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:07.017074   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:07.017121   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:07.093873   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:07.093901   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:07.093925   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:07.171258   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:07.171293   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:07.212588   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:07.212624   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:05.166177   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:07.665354   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.665558   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:05.512109   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:07.512615   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.513483   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:08.316327   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:10.316456   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.767332   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:09.781184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:09.781249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:09.818966   80762 cri.go:89] found id: ""
	I0612 21:41:09.818999   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.819008   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:09.819014   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:09.819064   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:09.854714   80762 cri.go:89] found id: ""
	I0612 21:41:09.854742   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.854760   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:09.854772   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:09.854823   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:09.891229   80762 cri.go:89] found id: ""
	I0612 21:41:09.891257   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.891268   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:09.891274   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:09.891335   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:09.928569   80762 cri.go:89] found id: ""
	I0612 21:41:09.928598   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.928610   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:09.928617   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:09.928680   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:09.963681   80762 cri.go:89] found id: ""
	I0612 21:41:09.963714   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.963725   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:09.963733   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:09.963819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:10.002340   80762 cri.go:89] found id: ""
	I0612 21:41:10.002368   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.002380   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:10.002388   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:10.002454   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:10.041935   80762 cri.go:89] found id: ""
	I0612 21:41:10.041961   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.041972   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:10.041979   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:10.042047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:10.080541   80762 cri.go:89] found id: ""
	I0612 21:41:10.080571   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.080584   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:10.080598   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:10.080614   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:10.140904   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:10.140944   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:10.176646   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:10.176688   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:10.272147   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:10.272169   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:10.272183   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:10.352649   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:10.352686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:12.166618   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.665896   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.013177   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.013716   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.317177   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.317391   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:16.815940   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.896274   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:12.911147   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:12.911231   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:12.947628   80762 cri.go:89] found id: ""
	I0612 21:41:12.947651   80762 logs.go:276] 0 containers: []
	W0612 21:41:12.947660   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:12.947665   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:12.947726   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:12.982813   80762 cri.go:89] found id: ""
	I0612 21:41:12.982837   80762 logs.go:276] 0 containers: []
	W0612 21:41:12.982845   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:12.982851   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:12.982898   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:13.021360   80762 cri.go:89] found id: ""
	I0612 21:41:13.021403   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.021412   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:13.021417   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:13.021468   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:13.063534   80762 cri.go:89] found id: ""
	I0612 21:41:13.063566   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.063576   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:13.063585   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:13.063666   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:13.098767   80762 cri.go:89] found id: ""
	I0612 21:41:13.098796   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.098807   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:13.098816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:13.098878   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:13.140764   80762 cri.go:89] found id: ""
	I0612 21:41:13.140797   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.140809   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:13.140816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:13.140882   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:13.180356   80762 cri.go:89] found id: ""
	I0612 21:41:13.180400   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.180413   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:13.180420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:13.180482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:13.215198   80762 cri.go:89] found id: ""
	I0612 21:41:13.215227   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.215238   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:13.215249   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:13.215265   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:13.273143   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:13.273182   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:13.287861   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:13.287893   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:13.366052   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:13.366073   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:13.366099   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:13.450980   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:13.451015   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:15.991386   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:16.005618   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:16.005699   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:16.047253   80762 cri.go:89] found id: ""
	I0612 21:41:16.047281   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.047289   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:16.047295   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:16.047356   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:16.082860   80762 cri.go:89] found id: ""
	I0612 21:41:16.082886   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.082894   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:16.082899   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:16.082948   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:16.123127   80762 cri.go:89] found id: ""
	I0612 21:41:16.123152   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.123164   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:16.123187   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:16.123247   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:16.167155   80762 cri.go:89] found id: ""
	I0612 21:41:16.167189   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.167199   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:16.167207   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:16.167276   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:16.204036   80762 cri.go:89] found id: ""
	I0612 21:41:16.204061   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.204071   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:16.204079   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:16.204140   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:16.246672   80762 cri.go:89] found id: ""
	I0612 21:41:16.246701   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.246712   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:16.246721   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:16.246785   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:16.286820   80762 cri.go:89] found id: ""
	I0612 21:41:16.286849   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.286857   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:16.286864   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:16.286919   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:16.326622   80762 cri.go:89] found id: ""
	I0612 21:41:16.326649   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.326660   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:16.326667   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:16.326678   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:16.407492   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:16.407525   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:16.448207   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:16.448236   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:16.501675   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:16.501714   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:16.518173   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:16.518202   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:16.592582   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:17.166163   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.167204   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:16.514405   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.016197   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:18.816596   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:20.817504   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.093054   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:19.107926   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:19.108002   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:19.149386   80762 cri.go:89] found id: ""
	I0612 21:41:19.149411   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.149421   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:19.149429   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:19.149493   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:19.188092   80762 cri.go:89] found id: ""
	I0612 21:41:19.188120   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.188131   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:19.188139   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:19.188201   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:19.227203   80762 cri.go:89] found id: ""
	I0612 21:41:19.227229   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.227235   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:19.227242   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:19.227301   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:19.269187   80762 cri.go:89] found id: ""
	I0612 21:41:19.269217   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.269225   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:19.269232   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:19.269294   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:19.305394   80762 cri.go:89] found id: ""
	I0612 21:41:19.305418   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.305425   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:19.305431   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:19.305480   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:19.347794   80762 cri.go:89] found id: ""
	I0612 21:41:19.347825   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.347837   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:19.347846   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:19.347907   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:19.384314   80762 cri.go:89] found id: ""
	I0612 21:41:19.384346   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.384364   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:19.384372   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:19.384428   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:19.421782   80762 cri.go:89] found id: ""
	I0612 21:41:19.421811   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.421822   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:19.421834   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:19.421849   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:19.475969   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:19.476000   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:19.490683   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:19.490710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:19.574492   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:19.574513   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:19.574525   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:19.662288   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:19.662324   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:22.205404   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:22.220217   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:22.220297   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:22.256870   80762 cri.go:89] found id: ""
	I0612 21:41:22.256901   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.256913   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:22.256921   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:22.256984   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:22.290380   80762 cri.go:89] found id: ""
	I0612 21:41:22.290413   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.290425   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:22.290433   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:22.290497   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:22.324981   80762 cri.go:89] found id: ""
	I0612 21:41:22.325010   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.325019   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:22.325024   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:22.325093   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:22.362900   80762 cri.go:89] found id: ""
	I0612 21:41:22.362926   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.362938   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:22.362946   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:22.363008   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:22.399004   80762 cri.go:89] found id: ""
	I0612 21:41:22.399037   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.399048   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:22.399056   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:22.399116   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:22.434306   80762 cri.go:89] found id: ""
	I0612 21:41:22.434341   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.434355   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:22.434365   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:22.434422   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:22.479085   80762 cri.go:89] found id: ""
	I0612 21:41:22.479116   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.479129   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:22.479142   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:22.479228   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:22.516730   80762 cri.go:89] found id: ""
	I0612 21:41:22.516755   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.516761   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:22.516769   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:22.516780   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:22.570921   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:22.570957   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:22.585409   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:22.585437   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:22.667314   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:22.667342   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:22.667360   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:22.743865   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:22.743901   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:21.170060   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.666364   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:21.021658   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.512541   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.316232   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.816641   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.282306   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:25.297334   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:25.297407   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:25.336610   80762 cri.go:89] found id: ""
	I0612 21:41:25.336641   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.336654   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:25.336662   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:25.336729   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:25.373307   80762 cri.go:89] found id: ""
	I0612 21:41:25.373338   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.373350   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:25.373358   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:25.373425   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:25.413141   80762 cri.go:89] found id: ""
	I0612 21:41:25.413169   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.413177   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:25.413183   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:25.413233   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:25.450810   80762 cri.go:89] found id: ""
	I0612 21:41:25.450842   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.450853   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:25.450862   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:25.450924   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:25.487017   80762 cri.go:89] found id: ""
	I0612 21:41:25.487049   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.487255   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:25.487269   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:25.487328   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:25.524335   80762 cri.go:89] found id: ""
	I0612 21:41:25.524361   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.524371   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:25.524377   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:25.524428   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:25.560394   80762 cri.go:89] found id: ""
	I0612 21:41:25.560421   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.560429   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:25.560435   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:25.560482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:25.599334   80762 cri.go:89] found id: ""
	I0612 21:41:25.599362   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.599372   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:25.599384   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:25.599399   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:25.680153   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:25.680195   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:25.726336   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:25.726377   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:25.777064   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:25.777098   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:25.791978   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:25.792007   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:25.868860   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:25.667028   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.164920   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.514249   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.012042   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:30.013658   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.316543   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:30.816789   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.369099   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:28.382729   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:28.382786   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:28.423835   80762 cri.go:89] found id: ""
	I0612 21:41:28.423865   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.423875   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:28.423889   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:28.423953   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:28.463098   80762 cri.go:89] found id: ""
	I0612 21:41:28.463127   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.463137   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:28.463144   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:28.463223   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:28.499678   80762 cri.go:89] found id: ""
	I0612 21:41:28.499707   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.499718   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:28.499726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:28.499786   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:28.536057   80762 cri.go:89] found id: ""
	I0612 21:41:28.536089   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.536101   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:28.536108   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:28.536180   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:28.571052   80762 cri.go:89] found id: ""
	I0612 21:41:28.571080   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.571090   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:28.571098   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:28.571162   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:28.607320   80762 cri.go:89] found id: ""
	I0612 21:41:28.607348   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.607360   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:28.607368   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:28.607427   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:28.643000   80762 cri.go:89] found id: ""
	I0612 21:41:28.643029   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.643037   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:28.643042   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:28.643113   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:28.684134   80762 cri.go:89] found id: ""
	I0612 21:41:28.684164   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.684175   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:28.684186   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:28.684201   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:28.737059   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:28.737092   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:28.753290   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:28.753320   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:28.826964   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:28.826990   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:28.827009   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:28.908874   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:28.908919   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:31.450362   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:31.465831   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:31.465912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:31.507441   80762 cri.go:89] found id: ""
	I0612 21:41:31.507465   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.507474   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:31.507482   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:31.507546   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:31.541403   80762 cri.go:89] found id: ""
	I0612 21:41:31.541437   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.541450   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:31.541458   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:31.541524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:31.576367   80762 cri.go:89] found id: ""
	I0612 21:41:31.576393   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.576405   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:31.576412   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:31.576475   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:31.615053   80762 cri.go:89] found id: ""
	I0612 21:41:31.615081   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.615091   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:31.615099   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:31.615159   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:31.650460   80762 cri.go:89] found id: ""
	I0612 21:41:31.650495   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.650504   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:31.650511   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:31.650580   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:31.690764   80762 cri.go:89] found id: ""
	I0612 21:41:31.690792   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.690803   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:31.690810   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:31.690870   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:31.729785   80762 cri.go:89] found id: ""
	I0612 21:41:31.729809   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.729817   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:31.729822   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:31.729881   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:31.772978   80762 cri.go:89] found id: ""
	I0612 21:41:31.773005   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.773013   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:31.773023   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:31.773038   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:31.830451   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:31.830484   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:31.846821   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:31.846850   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:31.927289   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:31.927328   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:31.927358   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:32.004814   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:32.004852   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:30.165423   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:32.165695   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.664959   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:32.512866   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.515104   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:33.316674   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:35.816686   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.550931   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:34.567559   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:34.567618   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:34.602234   80762 cri.go:89] found id: ""
	I0612 21:41:34.602260   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.602267   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:34.602273   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:34.602318   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:34.639575   80762 cri.go:89] found id: ""
	I0612 21:41:34.639598   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.639605   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:34.639610   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:34.639656   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:34.681325   80762 cri.go:89] found id: ""
	I0612 21:41:34.681360   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.681368   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:34.681374   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:34.681478   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:34.721405   80762 cri.go:89] found id: ""
	I0612 21:41:34.721432   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.721444   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:34.721451   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:34.721517   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:34.764344   80762 cri.go:89] found id: ""
	I0612 21:41:34.764375   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.764386   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:34.764394   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:34.764459   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:34.802083   80762 cri.go:89] found id: ""
	I0612 21:41:34.802107   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.802115   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:34.802121   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:34.802181   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:34.843418   80762 cri.go:89] found id: ""
	I0612 21:41:34.843441   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.843450   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:34.843455   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:34.843501   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:34.877803   80762 cri.go:89] found id: ""
	I0612 21:41:34.877832   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.877842   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:34.877852   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:34.877867   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:34.930515   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:34.930545   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:34.943705   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:34.943729   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:35.024912   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:35.024931   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:35.024941   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:35.109129   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:35.109165   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:37.651788   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:37.667901   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:37.667964   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:37.709599   80762 cri.go:89] found id: ""
	I0612 21:41:37.709627   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.709637   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:37.709645   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:37.709700   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:37.747150   80762 cri.go:89] found id: ""
	I0612 21:41:37.747191   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.747204   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:37.747212   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:37.747273   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:37.785528   80762 cri.go:89] found id: ""
	I0612 21:41:37.785552   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.785560   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:37.785567   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:37.785614   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:37.822363   80762 cri.go:89] found id: ""
	I0612 21:41:37.822390   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.822400   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:37.822408   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:37.822468   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:36.666054   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:39.165222   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:37.012397   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:39.012503   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:38.317132   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:40.821114   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:37.858285   80762 cri.go:89] found id: ""
	I0612 21:41:37.858395   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.858409   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:37.858416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:37.858466   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:37.897500   80762 cri.go:89] found id: ""
	I0612 21:41:37.897542   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.897556   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:37.897574   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:37.897635   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:37.937878   80762 cri.go:89] found id: ""
	I0612 21:41:37.937905   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.937921   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:37.937927   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:37.937985   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:37.978282   80762 cri.go:89] found id: ""
	I0612 21:41:37.978310   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.978319   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:37.978327   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:37.978341   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:38.055864   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:38.055890   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:38.055903   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:38.135883   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:38.135918   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:38.178641   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:38.178668   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:38.236635   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:38.236686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:40.759426   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:40.773526   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:40.773598   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:40.819130   80762 cri.go:89] found id: ""
	I0612 21:41:40.819161   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.819190   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:40.819202   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:40.819264   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:40.856176   80762 cri.go:89] found id: ""
	I0612 21:41:40.856204   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.856216   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:40.856224   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:40.856287   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:40.896709   80762 cri.go:89] found id: ""
	I0612 21:41:40.896739   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.896750   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:40.896759   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:40.896820   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:40.936431   80762 cri.go:89] found id: ""
	I0612 21:41:40.936457   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.936465   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:40.936471   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:40.936528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:40.979773   80762 cri.go:89] found id: ""
	I0612 21:41:40.979809   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.979820   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:40.979828   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:40.979892   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:41.023885   80762 cri.go:89] found id: ""
	I0612 21:41:41.023910   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.023919   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:41.023925   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:41.024004   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:41.070370   80762 cri.go:89] found id: ""
	I0612 21:41:41.070396   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.070405   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:41.070411   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:41.070467   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:41.115282   80762 cri.go:89] found id: ""
	I0612 21:41:41.115311   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.115321   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:41.115332   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:41.115346   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:41.128680   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:41.128710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:41.206100   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:41.206125   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:41.206140   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:41.283499   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:41.283536   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:41.323275   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:41.323307   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:41.166258   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.666600   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:41.013379   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.512866   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.316659   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:45.816066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.875750   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:43.890156   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:43.890216   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:43.935105   80762 cri.go:89] found id: ""
	I0612 21:41:43.935135   80762 logs.go:276] 0 containers: []
	W0612 21:41:43.935147   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:43.935155   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:43.935236   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:43.980929   80762 cri.go:89] found id: ""
	I0612 21:41:43.980958   80762 logs.go:276] 0 containers: []
	W0612 21:41:43.980967   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:43.980973   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:43.981051   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:44.029387   80762 cri.go:89] found id: ""
	I0612 21:41:44.029409   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.029416   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:44.029421   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:44.029483   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:44.067415   80762 cri.go:89] found id: ""
	I0612 21:41:44.067449   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.067460   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:44.067468   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:44.067528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:44.105093   80762 cri.go:89] found id: ""
	I0612 21:41:44.105117   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.105125   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:44.105131   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:44.105178   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:44.142647   80762 cri.go:89] found id: ""
	I0612 21:41:44.142680   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.142691   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:44.142699   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:44.142759   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:44.182725   80762 cri.go:89] found id: ""
	I0612 21:41:44.182756   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.182767   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:44.182775   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:44.182836   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:44.219538   80762 cri.go:89] found id: ""
	I0612 21:41:44.219568   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.219580   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:44.219593   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:44.219608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:44.272234   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:44.272267   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:44.285631   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:44.285663   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:44.362453   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:44.362470   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:44.362482   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:44.444624   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:44.444656   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:46.985731   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:46.999749   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:46.999819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:47.035051   80762 cri.go:89] found id: ""
	I0612 21:41:47.035073   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.035080   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:47.035086   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:47.035136   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:47.077929   80762 cri.go:89] found id: ""
	I0612 21:41:47.077964   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.077975   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:47.077982   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:47.078062   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:47.111621   80762 cri.go:89] found id: ""
	I0612 21:41:47.111660   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.111671   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:47.111679   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:47.111744   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:47.150700   80762 cri.go:89] found id: ""
	I0612 21:41:47.150725   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.150733   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:47.150739   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:47.150787   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:47.189547   80762 cri.go:89] found id: ""
	I0612 21:41:47.189576   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.189586   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:47.189593   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:47.189660   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:47.229482   80762 cri.go:89] found id: ""
	I0612 21:41:47.229510   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.229522   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:47.229530   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:47.229599   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:47.266798   80762 cri.go:89] found id: ""
	I0612 21:41:47.266826   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.266837   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:47.266844   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:47.266906   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:47.302256   80762 cri.go:89] found id: ""
	I0612 21:41:47.302280   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.302287   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:47.302295   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:47.302306   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:47.354485   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:47.354526   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:47.368689   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:47.368713   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:47.438219   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:47.438244   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:47.438257   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:47.514199   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:47.514227   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:46.165541   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:48.664957   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:45.512922   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:47.513491   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.012630   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:47.817136   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.317083   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.056394   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:50.069348   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:50.069482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:50.106057   80762 cri.go:89] found id: ""
	I0612 21:41:50.106087   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.106097   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:50.106104   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:50.106162   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:50.144532   80762 cri.go:89] found id: ""
	I0612 21:41:50.144560   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.144571   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:50.144579   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:50.144662   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:50.184549   80762 cri.go:89] found id: ""
	I0612 21:41:50.184575   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.184583   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:50.184588   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:50.184648   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:50.228520   80762 cri.go:89] found id: ""
	I0612 21:41:50.228555   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.228569   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:50.228578   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:50.228649   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:50.265697   80762 cri.go:89] found id: ""
	I0612 21:41:50.265726   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.265737   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:50.265744   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:50.265818   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:50.301353   80762 cri.go:89] found id: ""
	I0612 21:41:50.301393   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.301410   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:50.301416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:50.301477   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:50.337273   80762 cri.go:89] found id: ""
	I0612 21:41:50.337298   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.337306   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:50.337313   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:50.337374   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:50.383090   80762 cri.go:89] found id: ""
	I0612 21:41:50.383116   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.383126   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:50.383138   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:50.383151   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:50.454193   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:50.454240   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:50.477753   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:50.477779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:50.544052   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:50.544075   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:50.544091   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:50.626441   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:50.626480   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:50.666068   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.666287   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.013142   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:54.512869   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.318942   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:54.816918   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:56.818011   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:53.168599   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:53.181682   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:53.181764   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:53.228060   80762 cri.go:89] found id: ""
	I0612 21:41:53.228096   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.228107   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:53.228115   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:53.228176   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:53.264867   80762 cri.go:89] found id: ""
	I0612 21:41:53.264890   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.264898   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:53.264903   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:53.264950   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:53.298351   80762 cri.go:89] found id: ""
	I0612 21:41:53.298378   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.298386   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:53.298392   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:53.298448   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:53.335888   80762 cri.go:89] found id: ""
	I0612 21:41:53.335910   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.335917   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:53.335922   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:53.335980   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:53.376131   80762 cri.go:89] found id: ""
	I0612 21:41:53.376166   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.376175   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:53.376183   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:53.376240   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:53.412059   80762 cri.go:89] found id: ""
	I0612 21:41:53.412082   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.412088   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:53.412097   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:53.412142   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:53.446776   80762 cri.go:89] found id: ""
	I0612 21:41:53.446805   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.446816   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:53.446823   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:53.446894   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:53.482411   80762 cri.go:89] found id: ""
	I0612 21:41:53.482433   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.482441   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:53.482449   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:53.482460   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:53.522419   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:53.522448   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:53.573107   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:53.573141   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:53.587101   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:53.587147   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:53.665631   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:53.665660   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:53.665675   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:56.242482   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:56.255606   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:56.255682   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:56.290837   80762 cri.go:89] found id: ""
	I0612 21:41:56.290861   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.290872   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:56.290880   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:56.290938   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:56.325429   80762 cri.go:89] found id: ""
	I0612 21:41:56.325458   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.325466   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:56.325471   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:56.325534   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:56.359809   80762 cri.go:89] found id: ""
	I0612 21:41:56.359835   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.359845   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:56.359852   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:56.359912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:56.397775   80762 cri.go:89] found id: ""
	I0612 21:41:56.397803   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.397815   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:56.397823   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:56.397884   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:56.433917   80762 cri.go:89] found id: ""
	I0612 21:41:56.433945   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.433956   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:56.433963   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:56.434028   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:56.467390   80762 cri.go:89] found id: ""
	I0612 21:41:56.467419   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.467429   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:56.467438   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:56.467496   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:56.504014   80762 cri.go:89] found id: ""
	I0612 21:41:56.504048   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.504059   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:56.504067   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:56.504138   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:56.544157   80762 cri.go:89] found id: ""
	I0612 21:41:56.544187   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.544198   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:56.544209   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:56.544224   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:56.595431   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:56.595462   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:56.608897   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:56.608936   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:56.682706   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:56.682735   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:56.682749   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:56.762598   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:56.762634   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:55.166152   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:57.167363   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.666265   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:56.514832   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:58.515091   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.317285   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:01.818345   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.302898   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:59.317901   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:59.317976   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:59.360136   80762 cri.go:89] found id: ""
	I0612 21:41:59.360164   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.360174   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:59.360181   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:59.360249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:59.397205   80762 cri.go:89] found id: ""
	I0612 21:41:59.397233   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.397244   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:59.397252   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:59.397312   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:59.437063   80762 cri.go:89] found id: ""
	I0612 21:41:59.437093   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.437105   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:59.437113   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:59.437172   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:59.472800   80762 cri.go:89] found id: ""
	I0612 21:41:59.472827   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.472835   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:59.472843   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:59.472904   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:59.509452   80762 cri.go:89] found id: ""
	I0612 21:41:59.509474   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.509482   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:59.509487   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:59.509534   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:59.546121   80762 cri.go:89] found id: ""
	I0612 21:41:59.546151   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.546162   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:59.546170   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:59.546231   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:59.582983   80762 cri.go:89] found id: ""
	I0612 21:41:59.583007   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.583014   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:59.583020   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:59.583065   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:59.621110   80762 cri.go:89] found id: ""
	I0612 21:41:59.621148   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.621160   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:59.621171   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:59.621187   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:59.673113   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:59.673143   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:59.688106   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:59.688171   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:59.767653   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:59.767678   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:59.767692   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:59.848467   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:59.848507   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:02.391324   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:02.406543   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:02.406621   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:02.442225   80762 cri.go:89] found id: ""
	I0612 21:42:02.442255   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.442265   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:02.442273   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:02.442341   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:02.479445   80762 cri.go:89] found id: ""
	I0612 21:42:02.479476   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.479487   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:02.479495   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:02.479557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:02.517654   80762 cri.go:89] found id: ""
	I0612 21:42:02.517685   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.517697   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:02.517705   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:02.517775   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:02.562743   80762 cri.go:89] found id: ""
	I0612 21:42:02.562777   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.562788   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:02.562807   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:02.562873   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:02.597775   80762 cri.go:89] found id: ""
	I0612 21:42:02.597805   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.597816   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:02.597824   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:02.597886   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:02.633869   80762 cri.go:89] found id: ""
	I0612 21:42:02.633901   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.633913   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:02.633921   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:02.633979   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:02.671931   80762 cri.go:89] found id: ""
	I0612 21:42:02.671962   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.671974   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:02.671982   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:02.672044   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:02.709162   80762 cri.go:89] found id: ""
	I0612 21:42:02.709192   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.709204   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:02.709214   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:02.709228   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:02.722937   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:02.722967   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:02.798249   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:02.798275   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:02.798292   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:02.165664   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:04.166215   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:01.012458   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:03.513414   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:04.317221   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:06.818062   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:02.876339   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:02.876376   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:02.913080   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:02.913106   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:05.464433   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:05.478249   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:05.478326   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:05.520742   80762 cri.go:89] found id: ""
	I0612 21:42:05.520765   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.520772   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:05.520778   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:05.520840   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:05.564864   80762 cri.go:89] found id: ""
	I0612 21:42:05.564896   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.564904   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:05.564911   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:05.564956   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:05.602917   80762 cri.go:89] found id: ""
	I0612 21:42:05.602942   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.602950   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:05.602956   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:05.603040   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:05.645073   80762 cri.go:89] found id: ""
	I0612 21:42:05.645104   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.645112   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:05.645119   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:05.645166   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:05.684133   80762 cri.go:89] found id: ""
	I0612 21:42:05.684165   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.684176   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:05.684184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:05.684249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:05.721461   80762 cri.go:89] found id: ""
	I0612 21:42:05.721489   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.721499   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:05.721506   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:05.721573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:05.756710   80762 cri.go:89] found id: ""
	I0612 21:42:05.756744   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.756755   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:05.756763   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:05.756814   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:05.792182   80762 cri.go:89] found id: ""
	I0612 21:42:05.792210   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.792220   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:05.792230   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:05.792245   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:05.836597   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:05.836632   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:05.888704   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:05.888742   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:05.903354   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:05.903387   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:05.976146   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:05.976169   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:05.976183   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:06.664789   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.666830   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:06.013885   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.512997   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:09.316398   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:11.317014   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.559612   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:08.573592   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:08.573648   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:08.613347   80762 cri.go:89] found id: ""
	I0612 21:42:08.613373   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.613381   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:08.613387   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:08.613449   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:08.650606   80762 cri.go:89] found id: ""
	I0612 21:42:08.650634   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.650643   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:08.650648   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:08.650692   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:08.687056   80762 cri.go:89] found id: ""
	I0612 21:42:08.687087   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.687097   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:08.687105   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:08.687191   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:08.723112   80762 cri.go:89] found id: ""
	I0612 21:42:08.723138   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.723146   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:08.723161   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:08.723238   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:08.764772   80762 cri.go:89] found id: ""
	I0612 21:42:08.764801   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.764812   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:08.764820   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:08.764888   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:08.801914   80762 cri.go:89] found id: ""
	I0612 21:42:08.801944   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.801954   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:08.801962   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:08.802025   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:08.837991   80762 cri.go:89] found id: ""
	I0612 21:42:08.838017   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.838025   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:08.838030   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:08.838084   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:08.874977   80762 cri.go:89] found id: ""
	I0612 21:42:08.875016   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.875027   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:08.875039   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:08.875058   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:08.931628   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:08.931659   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:08.946763   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:08.946791   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:09.028039   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:09.028061   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:09.028079   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:09.104350   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:09.104406   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:11.645114   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:11.659382   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:11.659455   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:11.702205   80762 cri.go:89] found id: ""
	I0612 21:42:11.702236   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.702246   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:11.702254   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:11.702309   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:11.748328   80762 cri.go:89] found id: ""
	I0612 21:42:11.748350   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.748357   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:11.748362   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:11.748408   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:11.788980   80762 cri.go:89] found id: ""
	I0612 21:42:11.789009   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.789020   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:11.789027   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:11.789083   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:11.829886   80762 cri.go:89] found id: ""
	I0612 21:42:11.829910   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.829920   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:11.829928   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:11.830006   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:11.870088   80762 cri.go:89] found id: ""
	I0612 21:42:11.870120   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.870131   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:11.870138   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:11.870201   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:11.907862   80762 cri.go:89] found id: ""
	I0612 21:42:11.907893   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.907905   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:11.907913   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:11.907973   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:11.947773   80762 cri.go:89] found id: ""
	I0612 21:42:11.947798   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.947808   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:11.947816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:11.947876   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:11.987806   80762 cri.go:89] found id: ""
	I0612 21:42:11.987837   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.987848   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:11.987859   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:11.987878   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:12.043451   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:12.043481   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:12.057946   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:12.057980   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:12.134265   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:12.134298   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:12.134310   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:12.221276   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:12.221315   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:11.165305   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.165846   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:11.012728   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.512292   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.512327   80243 pod_ready.go:81] duration metric: took 4m0.006424182s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	E0612 21:42:13.512336   80243 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0612 21:42:13.512343   80243 pod_ready.go:38] duration metric: took 4m5.595554955s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:42:13.512359   80243 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:42:13.512383   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:13.512428   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:13.571855   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:13.571882   80243 cri.go:89] found id: ""
	I0612 21:42:13.571892   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:13.571942   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.576505   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:13.576557   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:13.614768   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:13.614792   80243 cri.go:89] found id: ""
	I0612 21:42:13.614799   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:13.614847   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.619276   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:13.619342   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:13.662832   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:13.662856   80243 cri.go:89] found id: ""
	I0612 21:42:13.662866   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:13.662931   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.667956   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:13.668031   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:13.710456   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:13.710479   80243 cri.go:89] found id: ""
	I0612 21:42:13.710487   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:13.710540   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.715411   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:13.715480   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:13.759924   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:13.759952   80243 cri.go:89] found id: ""
	I0612 21:42:13.759965   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:13.760027   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.764854   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:13.764919   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:13.804802   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:13.804826   80243 cri.go:89] found id: ""
	I0612 21:42:13.804834   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:13.804891   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.809421   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:13.809489   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:13.846580   80243 cri.go:89] found id: ""
	I0612 21:42:13.846615   80243 logs.go:276] 0 containers: []
	W0612 21:42:13.846625   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:13.846633   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:13.846685   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:13.893480   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:13.893504   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:13.893510   80243 cri.go:89] found id: ""
	I0612 21:42:13.893523   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:13.893571   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.898530   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.905072   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:13.905100   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:13.942165   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:13.942195   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:13.981852   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:13.981882   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:14.018431   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:14.018457   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:14.067616   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:14.067645   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:14.082853   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:14.082886   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:14.220156   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:14.220188   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:14.274395   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:14.274430   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:14.319087   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:14.319116   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:14.834792   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:14.834831   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:14.893213   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:14.893252   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:14.957423   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:14.957466   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:15.013756   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:15.013803   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:13.318558   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:15.318904   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:14.760949   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:14.775242   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:14.775356   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:14.818818   80762 cri.go:89] found id: ""
	I0612 21:42:14.818847   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.818856   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:14.818863   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:14.818931   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:14.859106   80762 cri.go:89] found id: ""
	I0612 21:42:14.859146   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.859157   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:14.859164   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:14.859247   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:14.894993   80762 cri.go:89] found id: ""
	I0612 21:42:14.895016   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.895026   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:14.895037   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:14.895087   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:14.943534   80762 cri.go:89] found id: ""
	I0612 21:42:14.943561   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.943572   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:14.943579   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:14.943645   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:14.985243   80762 cri.go:89] found id: ""
	I0612 21:42:14.985267   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.985274   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:14.985280   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:14.985328   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:15.029253   80762 cri.go:89] found id: ""
	I0612 21:42:15.029286   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.029297   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:15.029305   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:15.029371   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:15.063471   80762 cri.go:89] found id: ""
	I0612 21:42:15.063499   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.063510   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:15.063517   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:15.063580   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:15.101152   80762 cri.go:89] found id: ""
	I0612 21:42:15.101181   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.101201   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:15.101212   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:15.101227   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:15.178398   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:15.178416   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:15.178429   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:15.255420   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:15.255468   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:15.295492   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:15.295519   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:15.345010   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:15.345051   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:15.166546   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.666141   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:19.672626   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.561453   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:17.579672   80243 api_server.go:72] duration metric: took 4m17.376220984s to wait for apiserver process to appear ...
	I0612 21:42:17.579702   80243 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:42:17.579741   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:17.579787   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:17.620290   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:17.620318   80243 cri.go:89] found id: ""
	I0612 21:42:17.620325   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:17.620387   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.624598   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:17.624658   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:17.665957   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:17.665985   80243 cri.go:89] found id: ""
	I0612 21:42:17.665995   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:17.666056   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.671143   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:17.671222   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:17.717377   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:17.717396   80243 cri.go:89] found id: ""
	I0612 21:42:17.717404   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:17.717459   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.721710   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:17.721774   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:17.762712   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:17.762739   80243 cri.go:89] found id: ""
	I0612 21:42:17.762749   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:17.762807   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.767258   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:17.767329   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:17.803905   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:17.803956   80243 cri.go:89] found id: ""
	I0612 21:42:17.803969   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:17.804034   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.808260   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:17.808323   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:17.847402   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:17.847432   80243 cri.go:89] found id: ""
	I0612 21:42:17.847441   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:17.847502   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.851685   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:17.851757   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:17.897026   80243 cri.go:89] found id: ""
	I0612 21:42:17.897051   80243 logs.go:276] 0 containers: []
	W0612 21:42:17.897059   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:17.897065   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:17.897122   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:17.953764   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:17.953793   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:17.953799   80243 cri.go:89] found id: ""
	I0612 21:42:17.953808   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:17.953875   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.959822   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.965103   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:17.965127   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:18.089205   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:18.089229   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:18.153823   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:18.153876   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:18.198010   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:18.198037   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:18.255380   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:18.255410   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:18.271692   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:18.271725   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:18.318018   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:18.318049   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:18.379352   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:18.379386   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:18.437854   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:18.437884   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:18.487618   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:18.487650   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:18.934735   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:18.934784   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:18.983010   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:18.983050   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:19.043569   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:19.043605   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:17.819422   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:20.315423   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.862640   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:17.879256   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:17.879333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:17.918910   80762 cri.go:89] found id: ""
	I0612 21:42:17.918940   80762 logs.go:276] 0 containers: []
	W0612 21:42:17.918951   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:17.918958   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:17.919018   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:17.959701   80762 cri.go:89] found id: ""
	I0612 21:42:17.959726   80762 logs.go:276] 0 containers: []
	W0612 21:42:17.959734   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:17.959739   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:17.959820   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:18.005096   80762 cri.go:89] found id: ""
	I0612 21:42:18.005125   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.005142   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:18.005150   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:18.005211   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:18.046877   80762 cri.go:89] found id: ""
	I0612 21:42:18.046907   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.046919   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:18.046927   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:18.046992   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:18.087907   80762 cri.go:89] found id: ""
	I0612 21:42:18.087934   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.087946   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:18.087953   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:18.088016   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:18.139423   80762 cri.go:89] found id: ""
	I0612 21:42:18.139453   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.139464   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:18.139473   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:18.139535   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:18.180433   80762 cri.go:89] found id: ""
	I0612 21:42:18.180459   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.180469   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:18.180476   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:18.180537   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:18.220966   80762 cri.go:89] found id: ""
	I0612 21:42:18.220996   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.221005   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:18.221015   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:18.221033   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:18.276006   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:18.276031   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:18.290975   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:18.291026   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:18.369318   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:18.369345   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:18.369359   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:18.451336   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:18.451380   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:21.016353   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:21.030544   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:21.030611   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:21.072558   80762 cri.go:89] found id: ""
	I0612 21:42:21.072583   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.072591   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:21.072596   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:21.072649   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:21.106320   80762 cri.go:89] found id: ""
	I0612 21:42:21.106352   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.106364   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:21.106372   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:21.106431   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:21.139155   80762 cri.go:89] found id: ""
	I0612 21:42:21.139201   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.139212   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:21.139220   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:21.139285   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:21.178731   80762 cri.go:89] found id: ""
	I0612 21:42:21.178762   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.178772   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:21.178779   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:21.178838   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:21.213606   80762 cri.go:89] found id: ""
	I0612 21:42:21.213635   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.213645   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:21.213652   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:21.213707   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:21.250966   80762 cri.go:89] found id: ""
	I0612 21:42:21.250993   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.251009   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:21.251017   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:21.251084   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:21.289434   80762 cri.go:89] found id: ""
	I0612 21:42:21.289457   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.289465   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:21.289474   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:21.289520   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:21.329028   80762 cri.go:89] found id: ""
	I0612 21:42:21.329058   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.329069   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:21.329080   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:21.329098   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:21.342621   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:21.342648   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:21.418742   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:21.418766   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:21.418779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:21.493909   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:21.493944   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:21.534693   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:21.534723   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:22.165337   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.166122   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:21.581443   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:42:21.586756   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 200:
	ok
	I0612 21:42:21.587670   80243 api_server.go:141] control plane version: v1.30.1
	I0612 21:42:21.587691   80243 api_server.go:131] duration metric: took 4.007982669s to wait for apiserver health ...
	I0612 21:42:21.587699   80243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:42:21.587720   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:21.587761   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:21.627942   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:21.627965   80243 cri.go:89] found id: ""
	I0612 21:42:21.627974   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:21.628036   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.632308   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:21.632380   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:21.674453   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:21.674474   80243 cri.go:89] found id: ""
	I0612 21:42:21.674482   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:21.674539   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.679303   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:21.679376   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:21.717454   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:21.717483   80243 cri.go:89] found id: ""
	I0612 21:42:21.717492   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:21.717555   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.722113   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:21.722176   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:21.758752   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:21.758780   80243 cri.go:89] found id: ""
	I0612 21:42:21.758790   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:21.758847   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.763397   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:21.763465   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:21.802552   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:21.802574   80243 cri.go:89] found id: ""
	I0612 21:42:21.802583   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:21.802641   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.807570   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:21.807633   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:21.855093   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:21.855118   80243 cri.go:89] found id: ""
	I0612 21:42:21.855128   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:21.855212   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.860163   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:21.860231   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:21.907934   80243 cri.go:89] found id: ""
	I0612 21:42:21.907960   80243 logs.go:276] 0 containers: []
	W0612 21:42:21.907969   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:21.907977   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:21.908046   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:21.950085   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:21.950114   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:21.950120   80243 cri.go:89] found id: ""
	I0612 21:42:21.950128   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:21.950186   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.955633   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.960017   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:21.960038   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:22.015659   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:22.015708   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:22.074063   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:22.074093   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:22.113545   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:22.113581   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:22.152550   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:22.152583   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:22.556816   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:22.556856   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:22.602506   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:22.602542   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:22.655545   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:22.655577   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:22.775731   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:22.775775   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:22.827447   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:22.827476   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:22.864866   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:22.864898   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:22.903885   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:22.903912   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:22.947166   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:22.947214   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:25.472711   80243 system_pods.go:59] 8 kube-system pods found
	I0612 21:42:25.472743   80243 system_pods.go:61] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running
	I0612 21:42:25.472750   80243 system_pods.go:61] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running
	I0612 21:42:25.472755   80243 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running
	I0612 21:42:25.472761   80243 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running
	I0612 21:42:25.472765   80243 system_pods.go:61] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running
	I0612 21:42:25.472770   80243 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running
	I0612 21:42:25.472777   80243 system_pods.go:61] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:42:25.472783   80243 system_pods.go:61] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running
	I0612 21:42:25.472794   80243 system_pods.go:74] duration metric: took 3.885088008s to wait for pod list to return data ...
	I0612 21:42:25.472803   80243 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:42:25.475046   80243 default_sa.go:45] found service account: "default"
	I0612 21:42:25.475072   80243 default_sa.go:55] duration metric: took 2.260179ms for default service account to be created ...
	I0612 21:42:25.475082   80243 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:42:25.479903   80243 system_pods.go:86] 8 kube-system pods found
	I0612 21:42:25.479925   80243 system_pods.go:89] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running
	I0612 21:42:25.479931   80243 system_pods.go:89] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running
	I0612 21:42:25.479935   80243 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running
	I0612 21:42:25.479940   80243 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running
	I0612 21:42:25.479944   80243 system_pods.go:89] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running
	I0612 21:42:25.479950   80243 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running
	I0612 21:42:25.479959   80243 system_pods.go:89] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:42:25.479969   80243 system_pods.go:89] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running
	I0612 21:42:25.479979   80243 system_pods.go:126] duration metric: took 4.890624ms to wait for k8s-apps to be running ...
	I0612 21:42:25.479990   80243 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:42:25.480037   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:42:25.496529   80243 system_svc.go:56] duration metric: took 16.534285ms WaitForService to wait for kubelet
	I0612 21:42:25.496549   80243 kubeadm.go:576] duration metric: took 4m25.293104149s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:42:25.496565   80243 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:42:25.499277   80243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:42:25.499293   80243 node_conditions.go:123] node cpu capacity is 2
	I0612 21:42:25.499304   80243 node_conditions.go:105] duration metric: took 2.734965ms to run NodePressure ...
	I0612 21:42:25.499314   80243 start.go:240] waiting for startup goroutines ...
	I0612 21:42:25.499320   80243 start.go:245] waiting for cluster config update ...
	I0612 21:42:25.499339   80243 start.go:254] writing updated cluster config ...
	I0612 21:42:25.499628   80243 ssh_runner.go:195] Run: rm -f paused
	I0612 21:42:25.547780   80243 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:42:25.549693   80243 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-376087" cluster and "default" namespace by default
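The block above shows the readiness sequence this profile went through before being declared ready: an apiserver healthz probe (returning 200 at https://192.168.61.80:8444/healthz), a wait for the kube-system pods and the default service account, a kubelet service check, and a NodePressure check. As a rough illustration only, the snippet below sketches a bounded healthz poll; it is not minikube's actual code, the URL and timeout are placeholders taken from this log, and certificate verification is skipped purely to keep the sketch self-contained.

```go
// Hypothetical sketch of an apiserver healthz poll, not minikube's implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	// The sketch skips certificate verification; the real check uses the
	// cluster's CA material.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200, as in the log above
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	// Placeholder values; the log shows 192.168.61.80:8444 for this profile.
	if err := waitForHealthz("https://192.168.61.80:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```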
	I0612 21:42:22.317078   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.317826   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:26.818102   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.086466   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:24.101820   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:24.101877   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:24.145732   80762 cri.go:89] found id: ""
	I0612 21:42:24.145757   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.145767   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:24.145774   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:24.145832   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:24.182765   80762 cri.go:89] found id: ""
	I0612 21:42:24.182788   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.182795   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:24.182801   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:24.182889   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:24.235093   80762 cri.go:89] found id: ""
	I0612 21:42:24.235121   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.235129   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:24.235134   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:24.235208   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:24.269788   80762 cri.go:89] found id: ""
	I0612 21:42:24.269809   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.269816   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:24.269822   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:24.269867   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:24.306594   80762 cri.go:89] found id: ""
	I0612 21:42:24.306620   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.306628   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:24.306634   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:24.306693   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:24.343766   80762 cri.go:89] found id: ""
	I0612 21:42:24.343786   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.343795   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:24.343802   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:24.343858   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:24.384417   80762 cri.go:89] found id: ""
	I0612 21:42:24.384447   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.384457   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:24.384464   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:24.384524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:24.424935   80762 cri.go:89] found id: ""
	I0612 21:42:24.424958   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.424965   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:24.424974   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:24.424988   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:24.499737   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:24.499771   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:24.537631   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:24.537667   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:24.593743   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:24.593779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:24.608078   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:24.608107   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:24.679729   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:27.180828   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:27.195484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:27.195552   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:27.235725   80762 cri.go:89] found id: ""
	I0612 21:42:27.235750   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.235761   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:27.235768   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:27.235816   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:27.279763   80762 cri.go:89] found id: ""
	I0612 21:42:27.279795   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.279806   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:27.279814   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:27.279875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:27.320510   80762 cri.go:89] found id: ""
	I0612 21:42:27.320534   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.320543   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:27.320554   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:27.320641   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:27.359195   80762 cri.go:89] found id: ""
	I0612 21:42:27.359227   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.359239   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:27.359247   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:27.359312   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:27.394977   80762 cri.go:89] found id: ""
	I0612 21:42:27.395004   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.395015   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:27.395033   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:27.395099   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:27.431905   80762 cri.go:89] found id: ""
	I0612 21:42:27.431925   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.431933   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:27.431945   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:27.431990   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:27.469929   80762 cri.go:89] found id: ""
	I0612 21:42:27.469954   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.469961   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:27.469967   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:27.470024   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:27.505128   80762 cri.go:89] found id: ""
	I0612 21:42:27.505153   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.505160   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:27.505169   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:27.505180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:27.556739   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:27.556771   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:27.572730   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:27.572757   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:27.646797   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:27.646819   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:27.646836   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:27.726554   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:27.726599   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:26.665496   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:29.166323   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:29.316302   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:31.316334   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:30.268770   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:30.282575   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:30.282635   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:30.321243   80762 cri.go:89] found id: ""
	I0612 21:42:30.321276   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.321288   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:30.321295   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:30.321342   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:30.359403   80762 cri.go:89] found id: ""
	I0612 21:42:30.359432   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.359443   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:30.359451   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:30.359505   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:30.395967   80762 cri.go:89] found id: ""
	I0612 21:42:30.396006   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.396015   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:30.396028   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:30.396087   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:30.438093   80762 cri.go:89] found id: ""
	I0612 21:42:30.438123   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.438132   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:30.438138   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:30.438192   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:30.476859   80762 cri.go:89] found id: ""
	I0612 21:42:30.476888   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.476898   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:30.476905   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:30.476968   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:30.512998   80762 cri.go:89] found id: ""
	I0612 21:42:30.513026   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.513037   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:30.513045   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:30.513106   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:30.548822   80762 cri.go:89] found id: ""
	I0612 21:42:30.548847   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.548855   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:30.548861   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:30.548908   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:30.584385   80762 cri.go:89] found id: ""
	I0612 21:42:30.584417   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.584426   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:30.584439   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:30.584454   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:30.685995   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:30.686019   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:30.686030   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:30.778789   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:30.778827   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:30.819467   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:30.819511   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:30.872563   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:30.872599   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:31.659828   80404 pod_ready.go:81] duration metric: took 4m0.000909177s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" ...
	E0612 21:42:31.659857   80404 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0612 21:42:31.659875   80404 pod_ready.go:38] duration metric: took 4m13.021158077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:42:31.659904   80404 kubeadm.go:591] duration metric: took 4m20.257268424s to restartPrimaryControlPlane
	W0612 21:42:31.659968   80404 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:42:31.660002   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:42:33.316457   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:35.316525   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:33.387831   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:33.401663   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:33.401740   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:33.439690   80762 cri.go:89] found id: ""
	I0612 21:42:33.439723   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.439735   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:33.439743   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:33.439805   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:33.480330   80762 cri.go:89] found id: ""
	I0612 21:42:33.480357   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.480365   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:33.480371   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:33.480422   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:33.520367   80762 cri.go:89] found id: ""
	I0612 21:42:33.520396   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.520407   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:33.520415   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:33.520476   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:33.556859   80762 cri.go:89] found id: ""
	I0612 21:42:33.556892   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.556904   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:33.556911   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:33.556963   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:33.595982   80762 cri.go:89] found id: ""
	I0612 21:42:33.596014   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.596024   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:33.596030   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:33.596091   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:33.630942   80762 cri.go:89] found id: ""
	I0612 21:42:33.630974   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.630986   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:33.630994   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:33.631055   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:33.671649   80762 cri.go:89] found id: ""
	I0612 21:42:33.671676   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.671684   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:33.671690   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:33.671734   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:33.716664   80762 cri.go:89] found id: ""
	I0612 21:42:33.716690   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.716700   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:33.716711   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:33.716726   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:33.734168   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:33.734198   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:33.826469   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:33.826491   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:33.826507   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:33.915109   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:33.915142   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:33.957969   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:33.958007   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:36.515258   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:36.529603   80762 kubeadm.go:591] duration metric: took 4m4.234271001s to restartPrimaryControlPlane
	W0612 21:42:36.529688   80762 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:42:36.529719   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:42:37.316720   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:39.317633   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:41.816783   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:41.545629   80762 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.01588354s)
	I0612 21:42:41.545734   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:42:41.561025   80762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:42:41.572788   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:42:41.583027   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:42:41.583052   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:42:41.583095   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:42:41.593433   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:42:41.593502   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:42:41.603944   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:42:41.613382   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:42:41.613432   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:42:41.622874   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:42:41.632270   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:42:41.632370   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:42:41.642072   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:42:41.652120   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:42:41.652194   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:42:41.662684   80762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:42:41.894903   80762 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
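Before re-running `kubeadm init` above, the log shows a stale-config pass: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not reference it (here the files are simply absent after `kubeadm reset`, so every grep exits with status 2 and the `rm -f` is a no-op). The sketch below mirrors that check-and-remove pattern in simplified form; it is an assumption-laden illustration, not the kubeadm.go implementation, which runs these steps over SSH on the guest.

```go
// Hypothetical sketch of the stale-kubeconfig cleanup shown above; the real
// logic in minikube's kubeadm.go shells out to grep/rm on the node instead.
package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes path unless it already references endpoint. A missing
// file is handled like a stale one, matching the grep-then-rm pattern in the
// log (grep exits non-zero, then rm -f runs harmlessly).
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // file already points at the expected control plane
	}
	return os.RemoveAll(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, endpoint); err != nil {
			fmt.Println(err)
		}
	}
}
```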
	I0612 21:42:43.817122   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:45.817164   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:47.817201   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:50.316134   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:52.317090   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:54.318066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:56.816196   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:58.817948   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:01.316826   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:03.728120   80404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.068094257s)
	I0612 21:43:03.728183   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:03.744990   80404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:43:03.755365   80404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:43:03.765154   80404 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:43:03.765181   80404 kubeadm.go:156] found existing configuration files:
	
	I0612 21:43:03.765226   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:43:03.775246   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:43:03.775304   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:43:03.785389   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:43:03.794999   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:43:03.795046   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:43:03.804771   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:43:03.814137   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:43:03.814187   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:43:03.824449   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:43:03.833631   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:43:03.833687   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:43:03.843203   80404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:43:03.895827   80404 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 21:43:03.895927   80404 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:43:04.040495   80404 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:43:04.040666   80404 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:43:04.040822   80404 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:43:04.252894   80404 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:43:04.254835   80404 out.go:204]   - Generating certificates and keys ...
	I0612 21:43:04.254952   80404 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:43:04.255060   80404 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:43:04.255219   80404 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:43:04.255296   80404 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:43:04.255399   80404 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:43:04.255490   80404 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:43:04.255589   80404 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:43:04.255692   80404 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:43:04.255794   80404 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:43:04.255885   80404 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:43:04.255923   80404 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:43:04.255978   80404 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:43:04.460505   80404 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:43:04.640215   80404 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 21:43:04.722455   80404 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:43:04.862670   80404 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:43:05.112478   80404 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:43:05.113163   80404 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:43:05.115573   80404 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:43:03.817386   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:06.317207   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:05.117650   80404 out.go:204]   - Booting up control plane ...
	I0612 21:43:05.117758   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:43:05.117887   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:43:05.119410   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:43:05.139412   80404 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:43:05.139504   80404 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:43:05.139575   80404 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:43:05.268539   80404 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 21:43:05.268636   80404 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 21:43:05.771267   80404 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.898809ms
	I0612 21:43:05.771364   80404 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 21:43:11.274484   80404 kubeadm.go:309] [api-check] The API server is healthy after 5.503111655s
	I0612 21:43:11.291273   80404 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 21:43:11.319349   80404 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 21:43:11.357447   80404 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 21:43:11.357709   80404 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-591460 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 21:43:11.368936   80404 kubeadm.go:309] [bootstrap-token] Using token: 0iiegq.ujvrnknfmyshffxu
	I0612 21:43:08.816875   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:10.817031   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:11.370411   80404 out.go:204]   - Configuring RBAC rules ...
	I0612 21:43:11.370567   80404 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 21:43:11.375891   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 21:43:11.388345   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 21:43:11.392726   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 21:43:11.396867   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 21:43:11.401212   80404 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 21:43:11.683506   80404 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 21:43:12.114832   80404 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 21:43:12.683696   80404 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 21:43:12.683724   80404 kubeadm.go:309] 
	I0612 21:43:12.683811   80404 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 21:43:12.683823   80404 kubeadm.go:309] 
	I0612 21:43:12.683938   80404 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 21:43:12.683958   80404 kubeadm.go:309] 
	I0612 21:43:12.684002   80404 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 21:43:12.684070   80404 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 21:43:12.684129   80404 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 21:43:12.684146   80404 kubeadm.go:309] 
	I0612 21:43:12.684232   80404 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 21:43:12.684247   80404 kubeadm.go:309] 
	I0612 21:43:12.684317   80404 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 21:43:12.684330   80404 kubeadm.go:309] 
	I0612 21:43:12.684398   80404 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 21:43:12.684502   80404 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 21:43:12.684595   80404 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 21:43:12.684604   80404 kubeadm.go:309] 
	I0612 21:43:12.684700   80404 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 21:43:12.684807   80404 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 21:43:12.684816   80404 kubeadm.go:309] 
	I0612 21:43:12.684915   80404 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0iiegq.ujvrnknfmyshffxu \
	I0612 21:43:12.685061   80404 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 21:43:12.685105   80404 kubeadm.go:309] 	--control-plane 
	I0612 21:43:12.685116   80404 kubeadm.go:309] 
	I0612 21:43:12.685237   80404 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 21:43:12.685248   80404 kubeadm.go:309] 
	I0612 21:43:12.685352   80404 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0iiegq.ujvrnknfmyshffxu \
	I0612 21:43:12.685509   80404 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 21:43:12.685622   80404 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:43:12.685831   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:43:12.685848   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:43:12.687835   80404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:43:12.689100   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:43:12.700384   80404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:43:12.720228   80404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:43:12.720305   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:12.720330   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-591460 minikube.k8s.io/updated_at=2024_06_12T21_43_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=embed-certs-591460 minikube.k8s.io/primary=true
	I0612 21:43:12.751866   80404 ops.go:34] apiserver oom_adj: -16
	I0612 21:43:12.927644   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:13.428393   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:13.928221   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:14.428286   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:12.817125   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:15.316899   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:14.928273   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:15.428431   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:15.927968   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:16.428202   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:16.927882   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.428544   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.927844   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:18.428385   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:18.928105   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:19.428421   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.317080   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:19.317419   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:21.816670   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:19.928638   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:20.428310   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:20.928565   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:21.428377   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:21.928158   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:22.428356   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:22.927863   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:23.427955   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:23.928226   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:24.427823   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:24.928404   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:25.428367   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:25.514417   80404 kubeadm.go:1107] duration metric: took 12.794169259s to wait for elevateKubeSystemPrivileges
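	The run of identical "kubectl get sa default" invocations above is a poll: minikube re-runs the command roughly every 500ms until the default service account exists, which is the wait the elevateKubeSystemPrivileges duration metric measures. A minimal sketch of that pattern in Go, assuming a plain kubectl binary on PATH and an illustrative two-minute timeout (both assumptions, not minikube's actual values):

	// Poll until "kubectl get sa default" succeeds, mirroring the retry
	// cadence visible in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // the default service account exists
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing in the log
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}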
	W0612 21:43:25.514460   80404 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 21:43:25.514470   80404 kubeadm.go:393] duration metric: took 5m14.162212832s to StartCluster
	I0612 21:43:25.514490   80404 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:43:25.514576   80404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:43:25.518597   80404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:43:25.518811   80404 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:43:25.520571   80404 out.go:177] * Verifying Kubernetes components...
	I0612 21:43:25.518903   80404 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:43:25.519030   80404 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:43:25.521967   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:43:25.522001   80404 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-591460"
	I0612 21:43:25.522043   80404 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-591460"
	W0612 21:43:25.522056   80404 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:43:25.522053   80404 addons.go:69] Setting default-storageclass=true in profile "embed-certs-591460"
	I0612 21:43:25.522089   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.522100   80404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-591460"
	I0612 21:43:25.522089   80404 addons.go:69] Setting metrics-server=true in profile "embed-certs-591460"
	I0612 21:43:25.522158   80404 addons.go:234] Setting addon metrics-server=true in "embed-certs-591460"
	W0612 21:43:25.522170   80404 addons.go:243] addon metrics-server should already be in state true
	I0612 21:43:25.522196   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.522502   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522509   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522532   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.522535   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.522585   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522611   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.538989   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I0612 21:43:25.539032   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0612 21:43:25.539591   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.539592   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.540199   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.540222   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.540293   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.540323   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.540610   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.540736   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.541265   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.541281   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.541312   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.541431   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.542393   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46299
	I0612 21:43:25.543042   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.543604   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.543643   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.543997   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.544208   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.547823   80404 addons.go:234] Setting addon default-storageclass=true in "embed-certs-591460"
	W0612 21:43:25.547849   80404 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:43:25.547878   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.548237   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.548272   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.558486   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46589
	I0612 21:43:25.558934   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.559936   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.559967   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.560387   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.560600   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.560728   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I0612 21:43:25.561116   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.561595   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.561610   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.561928   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.562198   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.562832   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.565065   80404 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:43:25.563946   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.565393   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0612 21:43:25.566521   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:43:25.566535   80404 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:43:25.566582   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.568114   80404 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:43:24.316660   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:25.810857   80157 pod_ready.go:81] duration metric: took 4m0.000926725s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" ...
	E0612 21:43:25.810888   80157 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0612 21:43:25.810936   80157 pod_ready.go:38] duration metric: took 4m14.539121336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:25.810971   80157 kubeadm.go:591] duration metric: took 4m21.56451584s to restartPrimaryControlPlane
	W0612 21:43:25.811042   80157 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:43:25.811074   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:43:25.567032   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.569772   80404 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:43:25.569794   80404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:43:25.569812   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.570271   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.570291   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.570363   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.570698   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.571498   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.571514   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.571539   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.571691   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.571861   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.572032   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.572851   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.572894   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.573962   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.574403   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.574429   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.574762   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.574974   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.575164   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.575464   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.589637   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39227
	I0612 21:43:25.590155   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.591035   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.591059   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.591596   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.591845   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.593885   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.594095   80404 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:43:25.594112   80404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:43:25.594131   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.597769   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.598347   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.598379   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.598434   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.598635   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.598766   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.598860   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.762134   80404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:43:25.818663   80404 node_ready.go:35] waiting up to 6m0s for node "embed-certs-591460" to be "Ready" ...
	I0612 21:43:25.830753   80404 node_ready.go:49] node "embed-certs-591460" has status "Ready":"True"
	I0612 21:43:25.830780   80404 node_ready.go:38] duration metric: took 12.086962ms for node "embed-certs-591460" to be "Ready" ...
	I0612 21:43:25.830792   80404 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:25.841084   80404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:25.929395   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:43:25.929427   80404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:43:26.001489   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:43:26.016234   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:43:26.016275   80404 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:43:26.030851   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:43:26.062707   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:43:26.062741   80404 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:43:26.157461   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:43:27.281342   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.279809959s)
	I0612 21:43:27.281364   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.250478112s)
	I0612 21:43:27.281392   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281405   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281408   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281420   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281712   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.281730   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.281739   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281748   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281861   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.281916   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.281933   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281942   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.283567   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.283582   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.283592   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.283597   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.283728   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.283740   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.324600   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.324625   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.324937   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.324941   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.324965   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.366096   80404 pod_ready.go:92] pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:27.366126   80404 pod_ready.go:81] duration metric: took 1.52501871s for pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:27.366139   80404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:27.530900   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.373391416s)
	I0612 21:43:27.530973   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.530987   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.531382   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.531399   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.531406   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.531419   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.531428   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.533199   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.533212   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.533226   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.533238   80404 addons.go:475] Verifying addon metrics-server=true in "embed-certs-591460"
	I0612 21:43:27.534895   80404 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:43:27.536129   80404 addons.go:510] duration metric: took 2.017228253s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
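	The metrics-server addon is enabled by copying its manifests under /etc/kubernetes/addons/ and applying them in a single kubectl invocation, as the Completed line above shows. A rough sketch of that apply step, reusing the paths from the log but with simplified environment and error handling (an illustration, not minikube's actual addon code):

	// Apply a set of addon manifests with one kubectl call, passing each
	// manifest as a separate -f flag, as in the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func applyAddons(kubectl, kubeconfig string, manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		if err := applyAddons("/var/lib/minikube/binaries/v1.30.1/kubectl", "/var/lib/minikube/kubeconfig", manifests); err != nil {
			fmt.Println(err)
		}
	}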
	I0612 21:43:28.373835   80404 pod_ready.go:92] pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.373862   80404 pod_ready.go:81] duration metric: took 1.007715736s for pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.373870   80404 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.379042   80404 pod_ready.go:92] pod "etcd-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.379065   80404 pod_ready.go:81] duration metric: took 5.188395ms for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.379078   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.384218   80404 pod_ready.go:92] pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.384233   80404 pod_ready.go:81] duration metric: took 5.148944ms for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.384241   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.389023   80404 pod_ready.go:92] pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.389046   80404 pod_ready.go:81] duration metric: took 4.78947ms for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.389056   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5l2wz" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.623880   80404 pod_ready.go:92] pod "kube-proxy-5l2wz" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.623902   80404 pod_ready.go:81] duration metric: took 234.83854ms for pod "kube-proxy-5l2wz" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.623910   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:29.022477   80404 pod_ready.go:92] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:29.022508   80404 pod_ready.go:81] duration metric: took 398.590821ms for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:29.022522   80404 pod_ready.go:38] duration metric: took 3.191712664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
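	Each pod_ready check above boils down to reading the pod's Ready condition. A compact client-go sketch of that check follows; the kubeconfig path and pod name are taken from the log purely as example values, and a real wait would retry until the condition turns true:

	// Report whether a single pod has the Ready condition set to True.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-embed-certs-591460", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s ready: %v\n", pod.Name, podReady(pod))
	}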
	I0612 21:43:29.022539   80404 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:43:29.022602   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:43:29.038776   80404 api_server.go:72] duration metric: took 3.51993276s to wait for apiserver process to appear ...
	I0612 21:43:29.038805   80404 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:43:29.038827   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:43:29.045455   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0612 21:43:29.047050   80404 api_server.go:141] control plane version: v1.30.1
	I0612 21:43:29.047072   80404 api_server.go:131] duration metric: took 8.260077ms to wait for apiserver health ...
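	The api_server wait above probes https://192.168.39.147:8443/healthz until it returns 200 with body "ok". A small illustrative probe in Go follows; it skips TLS verification purely to keep the example short, whereas the real client would trust the cluster CA, and the timeout is an assumption:

	// Probe the apiserver /healthz endpoint until it answers 200/ok.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.147:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}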
	I0612 21:43:29.047080   80404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:43:29.226569   80404 system_pods.go:59] 9 kube-system pods found
	I0612 21:43:29.226603   80404 system_pods.go:61] "coredns-7db6d8ff4d-fpf5q" [1091154b-ef24-4447-b294-03f8d704f37e] Running
	I0612 21:43:29.226611   80404 system_pods.go:61] "coredns-7db6d8ff4d-hs7zn" [d8af54bf-17f9-48fe-a770-536c2313bc2a] Running
	I0612 21:43:29.226618   80404 system_pods.go:61] "etcd-embed-certs-591460" [bc7ad3a2-6cb6-4c32-94a7-20f6e3337b86] Running
	I0612 21:43:29.226624   80404 system_pods.go:61] "kube-apiserver-embed-certs-591460" [94b14cb3-5c3d-4be7-b5dc-3259d1fac58c] Running
	I0612 21:43:29.226631   80404 system_pods.go:61] "kube-controller-manager-embed-certs-591460" [c66f1ad8-df77-466e-9bbf-292e0937c7df] Running
	I0612 21:43:29.226636   80404 system_pods.go:61] "kube-proxy-5l2wz" [7130c7fb-880b-4a7b-937d-3980c89f217a] Running
	I0612 21:43:29.226642   80404 system_pods.go:61] "kube-scheduler-embed-certs-591460" [a02c9ded-942d-4107-a8f5-878a7924f1a4] Running
	I0612 21:43:29.226652   80404 system_pods.go:61] "metrics-server-569cc877fc-r7fbt" [e33a1ff8-3032-4be5-8b6a-3eedfbb92611] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:43:29.226659   80404 system_pods.go:61] "storage-provisioner" [ade8816b-866c-4ba3-9665-fc9b144a4286] Running
	I0612 21:43:29.226671   80404 system_pods.go:74] duration metric: took 179.583899ms to wait for pod list to return data ...
	I0612 21:43:29.226684   80404 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:43:29.422244   80404 default_sa.go:45] found service account: "default"
	I0612 21:43:29.422278   80404 default_sa.go:55] duration metric: took 195.585835ms for default service account to be created ...
	I0612 21:43:29.422290   80404 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:43:29.626614   80404 system_pods.go:86] 9 kube-system pods found
	I0612 21:43:29.626650   80404 system_pods.go:89] "coredns-7db6d8ff4d-fpf5q" [1091154b-ef24-4447-b294-03f8d704f37e] Running
	I0612 21:43:29.626659   80404 system_pods.go:89] "coredns-7db6d8ff4d-hs7zn" [d8af54bf-17f9-48fe-a770-536c2313bc2a] Running
	I0612 21:43:29.626667   80404 system_pods.go:89] "etcd-embed-certs-591460" [bc7ad3a2-6cb6-4c32-94a7-20f6e3337b86] Running
	I0612 21:43:29.626673   80404 system_pods.go:89] "kube-apiserver-embed-certs-591460" [94b14cb3-5c3d-4be7-b5dc-3259d1fac58c] Running
	I0612 21:43:29.626680   80404 system_pods.go:89] "kube-controller-manager-embed-certs-591460" [c66f1ad8-df77-466e-9bbf-292e0937c7df] Running
	I0612 21:43:29.626687   80404 system_pods.go:89] "kube-proxy-5l2wz" [7130c7fb-880b-4a7b-937d-3980c89f217a] Running
	I0612 21:43:29.626693   80404 system_pods.go:89] "kube-scheduler-embed-certs-591460" [a02c9ded-942d-4107-a8f5-878a7924f1a4] Running
	I0612 21:43:29.626703   80404 system_pods.go:89] "metrics-server-569cc877fc-r7fbt" [e33a1ff8-3032-4be5-8b6a-3eedfbb92611] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:43:29.626714   80404 system_pods.go:89] "storage-provisioner" [ade8816b-866c-4ba3-9665-fc9b144a4286] Running
	I0612 21:43:29.626725   80404 system_pods.go:126] duration metric: took 204.428087ms to wait for k8s-apps to be running ...
	I0612 21:43:29.626737   80404 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:43:29.626793   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:29.642423   80404 system_svc.go:56] duration metric: took 15.67694ms WaitForService to wait for kubelet
	I0612 21:43:29.642457   80404 kubeadm.go:576] duration metric: took 4.123619864s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:43:29.642481   80404 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:43:29.825804   80404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:43:29.825833   80404 node_conditions.go:123] node cpu capacity is 2
	I0612 21:43:29.825846   80404 node_conditions.go:105] duration metric: took 183.359091ms to run NodePressure ...
	I0612 21:43:29.825860   80404 start.go:240] waiting for startup goroutines ...
	I0612 21:43:29.825868   80404 start.go:245] waiting for cluster config update ...
	I0612 21:43:29.825881   80404 start.go:254] writing updated cluster config ...
	I0612 21:43:29.826229   80404 ssh_runner.go:195] Run: rm -f paused
	I0612 21:43:29.878580   80404 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:43:29.880438   80404 out.go:177] * Done! kubectl is now configured to use "embed-certs-591460" cluster and "default" namespace by default
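	The final start.go line compares the kubectl client version against the cluster version and reports the minor-version skew ("kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)"). A toy version of that comparison, with deliberately simplified parsing (a real implementation would use a semver library):

	// Compute the client/cluster minor-version skew shown in the log line above.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0
		}
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		client, cluster := "1.30.2", "1.30.1"
		skew := minor(client) - minor(cluster)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
	}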
	I0612 21:43:57.924825   80157 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.113719509s)
	I0612 21:43:57.924912   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:57.942507   80157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:43:57.953901   80157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:43:57.964374   80157 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:43:57.964396   80157 kubeadm.go:156] found existing configuration files:
	
	I0612 21:43:57.964439   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:43:57.974281   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:43:57.974366   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:43:57.985000   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:43:57.995268   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:43:57.995346   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:43:58.005482   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:43:58.015598   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:43:58.015659   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:43:58.028582   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:43:58.038706   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:43:58.038756   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
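	The stale-config cleanup above greps each kubeconfig under /etc/kubernetes for the control-plane endpoint and removes the file when the endpoint is absent; after the earlier kubeadm reset the files are simply missing, so every grep exits with status 2 and the rm runs anyway before kubeadm init is restarted. A sketch of that logic, assuming the same endpoint and file list as the log:

	// Remove kubeconfigs that do not reference the expected control-plane
	// endpoint, mirroring the grep-then-rm sequence in the log above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or stale endpoint: delete so kubeadm init starts clean.
				if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
					fmt.Printf("could not remove %s: %v\n", f, rmErr)
				}
			}
		}
	}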
	I0612 21:43:58.051818   80157 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:43:58.110576   80157 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 21:43:58.110645   80157 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:43:58.274454   80157 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:43:58.274625   80157 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:43:58.274751   80157 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:43:58.484837   80157 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:43:58.486643   80157 out.go:204]   - Generating certificates and keys ...
	I0612 21:43:58.486753   80157 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:43:58.486845   80157 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:43:58.486963   80157 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:43:58.487058   80157 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:43:58.487192   80157 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:43:58.487283   80157 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:43:58.487368   80157 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:43:58.487452   80157 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:43:58.487559   80157 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:43:58.487653   80157 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:43:58.487728   80157 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:43:58.487826   80157 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:43:58.644916   80157 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:43:58.789369   80157 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 21:43:58.924153   80157 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:43:59.044332   80157 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:43:59.352910   80157 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:43:59.353462   80157 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:43:59.356967   80157 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:43:59.359470   80157 out.go:204]   - Booting up control plane ...
	I0612 21:43:59.359596   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:43:59.359687   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:43:59.359792   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:43:59.378280   80157 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:43:59.379149   80157 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:43:59.379240   80157 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:43:59.521694   80157 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 21:43:59.521775   80157 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 21:44:00.036696   80157 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 514.972931ms
	I0612 21:44:00.036836   80157 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 21:44:05.539363   80157 kubeadm.go:309] [api-check] The API server is healthy after 5.502859715s
	I0612 21:44:05.552779   80157 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 21:44:05.567296   80157 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 21:44:05.603398   80157 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 21:44:05.603707   80157 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-087875 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 21:44:05.619311   80157 kubeadm.go:309] [bootstrap-token] Using token: x2knjj.1kuv2wdowwsbztfg
	I0612 21:44:05.621026   80157 out.go:204]   - Configuring RBAC rules ...
	I0612 21:44:05.621180   80157 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 21:44:05.628474   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 21:44:05.642438   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 21:44:05.647606   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 21:44:05.651982   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 21:44:05.656129   80157 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 21:44:05.947680   80157 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 21:44:06.430716   80157 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 21:44:06.950446   80157 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 21:44:06.951688   80157 kubeadm.go:309] 
	I0612 21:44:06.951771   80157 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 21:44:06.951782   80157 kubeadm.go:309] 
	I0612 21:44:06.951857   80157 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 21:44:06.951866   80157 kubeadm.go:309] 
	I0612 21:44:06.951919   80157 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 21:44:06.952007   80157 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 21:44:06.952083   80157 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 21:44:06.952094   80157 kubeadm.go:309] 
	I0612 21:44:06.952160   80157 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 21:44:06.952172   80157 kubeadm.go:309] 
	I0612 21:44:06.952222   80157 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 21:44:06.952232   80157 kubeadm.go:309] 
	I0612 21:44:06.952285   80157 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 21:44:06.952375   80157 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 21:44:06.952460   80157 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 21:44:06.952476   80157 kubeadm.go:309] 
	I0612 21:44:06.952612   80157 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 21:44:06.952711   80157 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 21:44:06.952722   80157 kubeadm.go:309] 
	I0612 21:44:06.952819   80157 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token x2knjj.1kuv2wdowwsbztfg \
	I0612 21:44:06.952933   80157 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 21:44:06.952963   80157 kubeadm.go:309] 	--control-plane 
	I0612 21:44:06.952985   80157 kubeadm.go:309] 
	I0612 21:44:06.953100   80157 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 21:44:06.953114   80157 kubeadm.go:309] 
	I0612 21:44:06.953219   80157 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token x2knjj.1kuv2wdowwsbztfg \
	I0612 21:44:06.953373   80157 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 21:44:06.953943   80157 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:44:06.953986   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:44:06.954003   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:44:06.956587   80157 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:44:06.957989   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:44:06.972666   80157 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
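The log records the conflist being copied but not the 496-byte payload itself; for orientation, a bridge CNI configuration of the kind written to /etc/cni/net.d typically looks like the sketch below. The CNI version, bridge name, and pod subnet shown here are illustrative assumptions, not values captured from this run.

	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}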
	I0612 21:44:07.000720   80157 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:44:07.000822   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:07.000839   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-087875 minikube.k8s.io/updated_at=2024_06_12T21_44_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=no-preload-087875 minikube.k8s.io/primary=true
	I0612 21:44:07.201613   80157 ops.go:34] apiserver oom_adj: -16
	I0612 21:44:07.201713   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:07.702791   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:08.201886   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:08.702020   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:09.202755   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:09.702683   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:10.202007   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:10.702272   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:11.201764   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:11.702383   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:12.201880   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:12.702587   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:13.202524   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:13.702498   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:14.202157   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:14.702197   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:15.201852   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:15.702444   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:16.201919   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:16.701722   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:17.202307   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:17.701823   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:18.202602   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:18.702354   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:19.202207   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:19.308654   80157 kubeadm.go:1107] duration metric: took 12.307897648s to wait for elevateKubeSystemPrivileges
	W0612 21:44:19.308699   80157 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 21:44:19.308709   80157 kubeadm.go:393] duration metric: took 5m15.118303799s to StartCluster
	I0612 21:44:19.308738   80157 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:44:19.308825   80157 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:44:19.311295   80157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:44:19.311587   80157 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:44:19.313263   80157 out.go:177] * Verifying Kubernetes components...
	I0612 21:44:19.311693   80157 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:44:19.311780   80157 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:44:19.315137   80157 addons.go:69] Setting storage-provisioner=true in profile "no-preload-087875"
	I0612 21:44:19.315148   80157 addons.go:69] Setting default-storageclass=true in profile "no-preload-087875"
	I0612 21:44:19.315192   80157 addons.go:234] Setting addon storage-provisioner=true in "no-preload-087875"
	I0612 21:44:19.315201   80157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-087875"
	I0612 21:44:19.315202   80157 addons.go:69] Setting metrics-server=true in profile "no-preload-087875"
	I0612 21:44:19.315240   80157 addons.go:234] Setting addon metrics-server=true in "no-preload-087875"
	W0612 21:44:19.315255   80157 addons.go:243] addon metrics-server should already be in state true
	I0612 21:44:19.315296   80157 host.go:66] Checking if "no-preload-087875" exists ...
	W0612 21:44:19.315209   80157 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:44:19.315397   80157 host.go:66] Checking if "no-preload-087875" exists ...
	I0612 21:44:19.315139   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:44:19.315636   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315666   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.315653   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315698   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.315731   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315750   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.331461   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40419
	I0612 21:44:19.331495   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I0612 21:44:19.331924   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.332019   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.332446   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.332466   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.332580   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.332603   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.332866   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.332911   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.333087   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.333484   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.333508   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.334462   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0612 21:44:19.334922   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.335447   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.335474   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.335812   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.336376   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.336408   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.336657   80157 addons.go:234] Setting addon default-storageclass=true in "no-preload-087875"
	W0612 21:44:19.336675   80157 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:44:19.336701   80157 host.go:66] Checking if "no-preload-087875" exists ...
	I0612 21:44:19.337047   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.337078   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.350724   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I0612 21:44:19.351308   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.351869   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.351897   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.352272   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.352503   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.354434   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I0612 21:44:19.354532   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.356594   80157 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:44:19.354927   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.355284   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37489
	I0612 21:44:19.357181   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.358026   80157 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:44:19.357219   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.358040   80157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:44:19.358048   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.358058   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.358407   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.358560   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.358577   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.359024   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.359035   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.359069   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.359408   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.361013   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.361524   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.363337   80157 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:44:19.361921   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.362312   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.364713   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:44:19.364727   80157 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:44:19.364736   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.364744   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.365021   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.365260   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.365419   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.368572   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.368971   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.368988   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.369144   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.369316   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.369431   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.369538   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.377220   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
	I0612 21:44:19.377598   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.378595   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.378621   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.378931   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.379127   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.380646   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.380844   80157 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:44:19.380857   80157 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:44:19.380869   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.383763   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.384201   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.384216   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.384504   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.384660   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.384816   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.384956   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.516231   80157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:44:19.539205   80157 node_ready.go:35] waiting up to 6m0s for node "no-preload-087875" to be "Ready" ...
	I0612 21:44:19.546948   80157 node_ready.go:49] node "no-preload-087875" has status "Ready":"True"
	I0612 21:44:19.546972   80157 node_ready.go:38] duration metric: took 7.739123ms for node "no-preload-087875" to be "Ready" ...
	I0612 21:44:19.546985   80157 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:44:19.553454   80157 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.562831   80157 pod_ready.go:92] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.562854   80157 pod_ready.go:81] duration metric: took 9.377758ms for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.562862   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.568274   80157 pod_ready.go:92] pod "kube-apiserver-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.568296   80157 pod_ready.go:81] duration metric: took 5.425162ms for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.568306   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.572960   80157 pod_ready.go:92] pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.572991   80157 pod_ready.go:81] duration metric: took 4.669828ms for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.573002   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lnhzt" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.620522   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:44:19.620548   80157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:44:19.654325   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:44:19.681762   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:44:19.681800   80157 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:44:19.699701   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:44:19.774496   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:44:19.774526   80157 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:44:19.874891   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:44:20.590260   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590292   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.590276   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590360   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.590587   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.590634   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.590644   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.590651   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590658   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.592402   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.592462   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.592410   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.592411   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.592414   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.592551   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.592476   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.592655   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.592952   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.593069   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.593093   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.634339   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.634370   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.634813   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.634864   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.634880   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.321337   80157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.446394551s)
	I0612 21:44:21.321389   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:21.321403   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:21.321802   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:21.321827   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:21.321968   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.322012   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:21.322023   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:21.322278   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:21.322294   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.322305   80157 addons.go:475] Verifying addon metrics-server=true in "no-preload-087875"
	I0612 21:44:21.324652   80157 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:44:21.326653   80157 addons.go:510] duration metric: took 2.01495884s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0612 21:44:21.589251   80157 pod_ready.go:92] pod "kube-proxy-lnhzt" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:21.589290   80157 pod_ready.go:81] duration metric: took 2.016278458s for pod "kube-proxy-lnhzt" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.589305   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.652083   80157 pod_ready.go:92] pod "kube-scheduler-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:21.652122   80157 pod_ready.go:81] duration metric: took 62.805318ms for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.652136   80157 pod_ready.go:38] duration metric: took 2.105136343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
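The readiness waits above are driven through the Kubernetes API by the test harness; the same node, pod, and metrics-server state can be inspected by hand with ordinary kubectl commands against the profile's context. These are generic verification commands, shown only as an illustration and not as output captured in this run:

	kubectl --context no-preload-087875 get nodes
	kubectl --context no-preload-087875 -n kube-system get pods
	kubectl --context no-preload-087875 get apiservice v1beta1.metrics.k8s.io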
	I0612 21:44:21.652156   80157 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:44:21.652237   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:44:21.683110   80157 api_server.go:72] duration metric: took 2.371482611s to wait for apiserver process to appear ...
	I0612 21:44:21.683148   80157 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:44:21.683187   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:44:21.704637   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 200:
	ok
	I0612 21:44:21.714032   80157 api_server.go:141] control plane version: v1.30.1
	I0612 21:44:21.714061   80157 api_server.go:131] duration metric: took 30.904631ms to wait for apiserver health ...
	I0612 21:44:21.714070   80157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:44:21.751484   80157 system_pods.go:59] 9 kube-system pods found
	I0612 21:44:21.751520   80157 system_pods.go:61] "coredns-7db6d8ff4d-hsvvf" [2b6c768b-75e2-4c11-99db-1103367ccc20] Running
	I0612 21:44:21.751526   80157 system_pods.go:61] "coredns-7db6d8ff4d-v75tt" [8b48ba7d-8f66-4c31-ac14-3a38e18fa249] Running
	I0612 21:44:21.751532   80157 system_pods.go:61] "etcd-no-preload-087875" [36cea519-d5ea-41f0-893f-358fe8af4448] Running
	I0612 21:44:21.751537   80157 system_pods.go:61] "kube-apiserver-no-preload-087875" [a09319fb-adef-467d-8482-5adf57328c2b] Running
	I0612 21:44:21.751544   80157 system_pods.go:61] "kube-controller-manager-no-preload-087875" [466fead1-a45a-4b33-8587-dc894fa20073] Running
	I0612 21:44:21.751548   80157 system_pods.go:61] "kube-proxy-lnhzt" [bdf1156c-ba02-4551-aefa-66379b05e066] Running
	I0612 21:44:21.751552   80157 system_pods.go:61] "kube-scheduler-no-preload-087875" [fc8eccee-2e27-4ea0-9e6c-0d5c127cdd4f] Running
	I0612 21:44:21.751560   80157 system_pods.go:61] "metrics-server-569cc877fc-mdmgw" [17725ee6-1d17-4a1b-9c65-f596b9b7725f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:44:21.751568   80157 system_pods.go:61] "storage-provisioner" [90368fec-12d9-4baf-aef6-233691b5e99d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:44:21.751581   80157 system_pods.go:74] duration metric: took 37.503399ms to wait for pod list to return data ...
	I0612 21:44:21.751595   80157 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:44:21.943440   80157 default_sa.go:45] found service account: "default"
	I0612 21:44:21.943465   80157 default_sa.go:55] duration metric: took 191.863221ms for default service account to be created ...
	I0612 21:44:21.943473   80157 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:44:22.146922   80157 system_pods.go:86] 9 kube-system pods found
	I0612 21:44:22.146960   80157 system_pods.go:89] "coredns-7db6d8ff4d-hsvvf" [2b6c768b-75e2-4c11-99db-1103367ccc20] Running
	I0612 21:44:22.146969   80157 system_pods.go:89] "coredns-7db6d8ff4d-v75tt" [8b48ba7d-8f66-4c31-ac14-3a38e18fa249] Running
	I0612 21:44:22.146975   80157 system_pods.go:89] "etcd-no-preload-087875" [36cea519-d5ea-41f0-893f-358fe8af4448] Running
	I0612 21:44:22.146982   80157 system_pods.go:89] "kube-apiserver-no-preload-087875" [a09319fb-adef-467d-8482-5adf57328c2b] Running
	I0612 21:44:22.146988   80157 system_pods.go:89] "kube-controller-manager-no-preload-087875" [466fead1-a45a-4b33-8587-dc894fa20073] Running
	I0612 21:44:22.146994   80157 system_pods.go:89] "kube-proxy-lnhzt" [bdf1156c-ba02-4551-aefa-66379b05e066] Running
	I0612 21:44:22.147000   80157 system_pods.go:89] "kube-scheduler-no-preload-087875" [fc8eccee-2e27-4ea0-9e6c-0d5c127cdd4f] Running
	I0612 21:44:22.147012   80157 system_pods.go:89] "metrics-server-569cc877fc-mdmgw" [17725ee6-1d17-4a1b-9c65-f596b9b7725f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:44:22.147030   80157 system_pods.go:89] "storage-provisioner" [90368fec-12d9-4baf-aef6-233691b5e99d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:44:22.147042   80157 system_pods.go:126] duration metric: took 203.562938ms to wait for k8s-apps to be running ...
	I0612 21:44:22.147056   80157 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:44:22.147110   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:44:22.167568   80157 system_svc.go:56] duration metric: took 20.500218ms WaitForService to wait for kubelet
	I0612 21:44:22.167606   80157 kubeadm.go:576] duration metric: took 2.855984791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:44:22.167627   80157 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:44:22.343015   80157 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:44:22.343039   80157 node_conditions.go:123] node cpu capacity is 2
	I0612 21:44:22.343051   80157 node_conditions.go:105] duration metric: took 175.419211ms to run NodePressure ...
	I0612 21:44:22.343064   80157 start.go:240] waiting for startup goroutines ...
	I0612 21:44:22.343073   80157 start.go:245] waiting for cluster config update ...
	I0612 21:44:22.343085   80157 start.go:254] writing updated cluster config ...
	I0612 21:44:22.343387   80157 ssh_runner.go:195] Run: rm -f paused
	I0612 21:44:22.391092   80157 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:44:22.393268   80157 out.go:177] * Done! kubectl is now configured to use "no-preload-087875" cluster and "default" namespace by default
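"Configured to use" here means minikube has written a cluster, user, and context entry for no-preload-087875 into the kubeconfig updated earlier in the log and made that context current. As a generic illustration (standard kubectl config subcommands, not output from this run), the result can be confirmed with:

	kubectl config current-context
	kubectl config get-contexts no-preload-087875
	kubectl config view --minify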
	I0612 21:44:37.700712   80762 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:44:37.700862   80762 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:44:37.702455   80762 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:44:37.702552   80762 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:44:37.702639   80762 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:44:37.702749   80762 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:44:37.702887   80762 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:44:37.702992   80762 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:44:37.704955   80762 out.go:204]   - Generating certificates and keys ...
	I0612 21:44:37.705032   80762 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:44:37.705088   80762 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:44:37.705159   80762 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:44:37.705228   80762 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:44:37.705289   80762 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:44:37.705368   80762 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:44:37.705467   80762 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:44:37.705538   80762 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:44:37.705620   80762 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:44:37.705683   80762 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:44:37.705723   80762 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:44:37.705773   80762 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:44:37.705816   80762 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:44:37.705861   80762 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:44:37.705917   80762 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:44:37.705964   80762 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:44:37.706062   80762 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:44:37.706172   80762 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:44:37.706231   80762 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:44:37.706288   80762 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:44:37.707753   80762 out.go:204]   - Booting up control plane ...
	I0612 21:44:37.707857   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:44:37.707931   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:44:37.707994   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:44:37.708064   80762 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:44:37.708197   80762 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:44:37.708251   80762 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:44:37.708344   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.708536   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.708600   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.708770   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.708864   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709067   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709133   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709340   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709441   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709638   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709650   80762 kubeadm.go:309] 
	I0612 21:44:37.709683   80762 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:44:37.709721   80762 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:44:37.709728   80762 kubeadm.go:309] 
	I0612 21:44:37.709777   80762 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:44:37.709817   80762 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:44:37.709910   80762 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:44:37.709917   80762 kubeadm.go:309] 
	I0612 21:44:37.710018   80762 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:44:37.710052   80762 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:44:37.710083   80762 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:44:37.710089   80762 kubeadm.go:309] 
	I0612 21:44:37.710184   80762 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:44:37.710259   80762 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:44:37.710265   80762 kubeadm.go:309] 
	I0612 21:44:37.710359   80762 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:44:37.710431   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:44:37.710497   80762 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:44:37.710563   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:44:37.710607   80762 kubeadm.go:309] 
	W0612 21:44:37.710666   80762 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0612 21:44:37.710709   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:44:38.170461   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:44:38.186842   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:44:38.198380   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:44:38.198400   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:44:38.198454   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:44:38.208876   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:44:38.208948   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:44:38.219641   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:44:38.229622   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:44:38.229685   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:44:38.240153   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:44:38.251342   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:44:38.251401   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:44:38.262662   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:44:38.272898   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:44:38.272954   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:44:38.283213   80762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:44:38.501637   80762 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:46:34.582636   80762 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:46:34.582745   80762 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:46:34.584702   80762 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:46:34.584775   80762 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:46:34.584898   80762 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:46:34.585029   80762 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:46:34.585172   80762 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:46:34.585263   80762 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:46:34.587030   80762 out.go:204]   - Generating certificates and keys ...
	I0612 21:46:34.587101   80762 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:46:34.587160   80762 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:46:34.587260   80762 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:46:34.587349   80762 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:46:34.587446   80762 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:46:34.587521   80762 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:46:34.587609   80762 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:46:34.587697   80762 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:46:34.587803   80762 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:46:34.587886   80762 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:46:34.588014   80762 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:46:34.588097   80762 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:46:34.588177   80762 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:46:34.588268   80762 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:46:34.588381   80762 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:46:34.588447   80762 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:46:34.588558   80762 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:46:34.588659   80762 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:46:34.588719   80762 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:46:34.588816   80762 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:46:34.590114   80762 out.go:204]   - Booting up control plane ...
	I0612 21:46:34.590226   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:46:34.590326   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:46:34.590444   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:46:34.590527   80762 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:46:34.590710   80762 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:46:34.590778   80762 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:46:34.590847   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591054   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591149   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591411   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591508   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591743   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591846   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.592108   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.592205   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.592395   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.592403   80762 kubeadm.go:309] 
	I0612 21:46:34.592436   80762 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:46:34.592485   80762 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:46:34.592500   80762 kubeadm.go:309] 
	I0612 21:46:34.592535   80762 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:46:34.592563   80762 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:46:34.592677   80762 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:46:34.592688   80762 kubeadm.go:309] 
	I0612 21:46:34.592820   80762 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:46:34.592855   80762 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:46:34.592883   80762 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:46:34.592890   80762 kubeadm.go:309] 
	I0612 21:46:34.593007   80762 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:46:34.593107   80762 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:46:34.593116   80762 kubeadm.go:309] 
	I0612 21:46:34.593224   80762 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:46:34.593342   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:46:34.593426   80762 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:46:34.593494   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:46:34.593552   80762 kubeadm.go:393] duration metric: took 8m2.356271864s to StartCluster
	I0612 21:46:34.593558   80762 kubeadm.go:309] 
	I0612 21:46:34.593589   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:46:34.593639   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:46:34.643842   80762 cri.go:89] found id: ""
	I0612 21:46:34.643876   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.643887   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:46:34.643905   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:46:34.643982   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:46:34.682878   80762 cri.go:89] found id: ""
	I0612 21:46:34.682899   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.682906   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:46:34.682912   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:46:34.682961   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:46:34.721931   80762 cri.go:89] found id: ""
	I0612 21:46:34.721955   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.721964   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:46:34.721969   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:46:34.722021   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:46:34.759233   80762 cri.go:89] found id: ""
	I0612 21:46:34.759266   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.759274   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:46:34.759280   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:46:34.759333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:46:34.800142   80762 cri.go:89] found id: ""
	I0612 21:46:34.800176   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.800186   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:46:34.800194   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:46:34.800256   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:46:34.836746   80762 cri.go:89] found id: ""
	I0612 21:46:34.836774   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.836784   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:46:34.836791   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:46:34.836850   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:46:34.876108   80762 cri.go:89] found id: ""
	I0612 21:46:34.876138   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.876147   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:46:34.876153   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:46:34.876202   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:46:34.912272   80762 cri.go:89] found id: ""
	I0612 21:46:34.912294   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.912301   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:46:34.912310   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:46:34.912324   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:46:34.997300   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:46:34.997331   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:46:34.997347   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:46:35.105602   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:46:35.105638   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:46:35.152818   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:46:35.152857   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:46:35.216504   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:46:35.216545   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0612 21:46:35.239531   80762 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0612 21:46:35.239581   80762 out.go:239] * 
	W0612 21:46:35.239646   80762 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:46:35.239672   80762 out.go:239] * 
	W0612 21:46:35.240600   80762 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0612 21:46:35.244822   80762 out.go:177] 
	W0612 21:46:35.246072   80762 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:46:35.246137   80762 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0612 21:46:35.246164   80762 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0612 21:46:35.247768   80762 out.go:177] 
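Note: the suggestion logged above (pass --extra-config=kubelet.cgroup-driver=systemd, see the linked minikube issue) points toward a possible cgroup-driver mismatch between the kubelet and CRI-O. The following is a minimal, hypothetical sketch of acting on that suggestion; <profile> is a placeholder for the affected minikube profile, the /etc/crio/crio.conf path is the CRI-O default and may differ, and none of these invocations were executed as part of this test run:

  # Compare the cgroup driver configured for the kubelet with CRI-O's cgroup manager on the node.
  # /var/lib/kubelet/config.yaml is the kubelet config path written by kubeadm in the log above.
  minikube -p <profile> ssh -- "grep cgroupDriver /var/lib/kubelet/config.yaml; grep cgroup_manager /etc/crio/crio.conf"

  # Retry the start with the kubelet cgroup driver forced to systemd, as suggested in the log.
  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd

  # If the control plane still fails to come up, inspect the kubelet on the node directly.
  minikube -p <profile> ssh -- "sudo systemctl status kubelet; sudo journalctl -xeu kubelet | tail -n 100"
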
	
	
	==> CRI-O <==
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.574466795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229087574442727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62068fcd-391b-41ed-87d9-4b5a2504021f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.575193854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c10e578-7976-43dd-88f1-6dd0f3f86f0e name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.575330986Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c10e578-7976-43dd-88f1-6dd0f3f86f0e name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.575647443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228309199167879,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c48508b30ca15f6432a84141dd0b289e83aa9987e92fc3f9545889492605b8,PodSandboxId:5586f183312b241e003e9f7240dd5a617efdb6a93ac13d42d3956a4274f4b20f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718228289028635935,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d9ff0c0-b2e4-4535-b3e5-3cd361febf51,},Annotations:map[string]string{io.kubernetes.container.hash: 629593af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266,PodSandboxId:fc1a2a9794167dad660926e30bd665fa3f91e43e219af59cb20c26bd5ad50f52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228286137199473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cllsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e26b02-5b11-490e-a1b9-0f12c5ba3830,},Annotations:map[string]string{io.kubernetes.container.hash: c6223842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd,PodSandboxId:298152ff9d202bf8c1ded25c6afd2cb835cb421a74775d6f68e79b86790270c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718228278560675981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lrgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f9342e-2
677-44be-8e22-2a8f45feeb57,},Annotations:map[string]string{io.kubernetes.container.hash: 2db9a195,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718228278389385625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b
-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1,PodSandboxId:d24ba04db930e91176979c74dc3dd4d42613be658694683f9b1940988093f274,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228273661486870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b18924acfd4d72129dec681761dc7e0d,},Annotations:map[
string]string{io.kubernetes.container.hash: 547b9474,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f,PodSandboxId:d671a1828f6193b249faf9a4b6a8e3003ecfb8a2730173bf2597aa8131f9c0f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228273745792333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec9370d627717114473c25d049fcefb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249,PodSandboxId:ab600e8cd42e1d241ed0afd1bbddb5a35619bcbc31cdc206def77155a5713dc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228273626794447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb301e61c8490e956bfefe1ed20670f5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 5e727e58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031,PodSandboxId:fdb5a19c0f4892ccc5be280826a890dadf1554e5e56ad554e138a6bd09a3f163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228273633333558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beacfc2e631a20f6822e78f2107d4e
bb,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c10e578-7976-43dd-88f1-6dd0f3f86f0e name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.613906733Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9763fcc-ed1b-40b4-949d-2f04af8c805c name=/runtime.v1.RuntimeService/Version
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.614203298Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9763fcc-ed1b-40b4-949d-2f04af8c805c name=/runtime.v1.RuntimeService/Version
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.615455628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5b088a0-2ca7-48e6-9f43-ac89b1051751 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.615908076Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229087615880716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5b088a0-2ca7-48e6-9f43-ac89b1051751 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.616494378Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35ad6e6a-5e20-4fe0-bdb7-461d0fb468d7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.616544783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35ad6e6a-5e20-4fe0-bdb7-461d0fb468d7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.616755687Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228309199167879,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c48508b30ca15f6432a84141dd0b289e83aa9987e92fc3f9545889492605b8,PodSandboxId:5586f183312b241e003e9f7240dd5a617efdb6a93ac13d42d3956a4274f4b20f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718228289028635935,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d9ff0c0-b2e4-4535-b3e5-3cd361febf51,},Annotations:map[string]string{io.kubernetes.container.hash: 629593af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266,PodSandboxId:fc1a2a9794167dad660926e30bd665fa3f91e43e219af59cb20c26bd5ad50f52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228286137199473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cllsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e26b02-5b11-490e-a1b9-0f12c5ba3830,},Annotations:map[string]string{io.kubernetes.container.hash: c6223842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd,PodSandboxId:298152ff9d202bf8c1ded25c6afd2cb835cb421a74775d6f68e79b86790270c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718228278560675981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lrgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f9342e-2
677-44be-8e22-2a8f45feeb57,},Annotations:map[string]string{io.kubernetes.container.hash: 2db9a195,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718228278389385625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b
-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1,PodSandboxId:d24ba04db930e91176979c74dc3dd4d42613be658694683f9b1940988093f274,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228273661486870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b18924acfd4d72129dec681761dc7e0d,},Annotations:map[
string]string{io.kubernetes.container.hash: 547b9474,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f,PodSandboxId:d671a1828f6193b249faf9a4b6a8e3003ecfb8a2730173bf2597aa8131f9c0f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228273745792333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec9370d627717114473c25d049fcefb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249,PodSandboxId:ab600e8cd42e1d241ed0afd1bbddb5a35619bcbc31cdc206def77155a5713dc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228273626794447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb301e61c8490e956bfefe1ed20670f5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 5e727e58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031,PodSandboxId:fdb5a19c0f4892ccc5be280826a890dadf1554e5e56ad554e138a6bd09a3f163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228273633333558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beacfc2e631a20f6822e78f2107d4e
bb,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35ad6e6a-5e20-4fe0-bdb7-461d0fb468d7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.657097912Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8201b4bf-9117-4969-8222-70e988c3708c name=/runtime.v1.RuntimeService/Version
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.657188711Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8201b4bf-9117-4969-8222-70e988c3708c name=/runtime.v1.RuntimeService/Version
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.658779340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77c3050b-b27b-4711-898e-ed7df1ad71dc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.659218416Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229087659196925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77c3050b-b27b-4711-898e-ed7df1ad71dc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.659882844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c5013a5-3722-4926-8201-ffe808c407f8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.659941719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c5013a5-3722-4926-8201-ffe808c407f8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.660212731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228309199167879,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c48508b30ca15f6432a84141dd0b289e83aa9987e92fc3f9545889492605b8,PodSandboxId:5586f183312b241e003e9f7240dd5a617efdb6a93ac13d42d3956a4274f4b20f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718228289028635935,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d9ff0c0-b2e4-4535-b3e5-3cd361febf51,},Annotations:map[string]string{io.kubernetes.container.hash: 629593af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266,PodSandboxId:fc1a2a9794167dad660926e30bd665fa3f91e43e219af59cb20c26bd5ad50f52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228286137199473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cllsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e26b02-5b11-490e-a1b9-0f12c5ba3830,},Annotations:map[string]string{io.kubernetes.container.hash: c6223842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd,PodSandboxId:298152ff9d202bf8c1ded25c6afd2cb835cb421a74775d6f68e79b86790270c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718228278560675981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lrgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f9342e-2
677-44be-8e22-2a8f45feeb57,},Annotations:map[string]string{io.kubernetes.container.hash: 2db9a195,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718228278389385625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b
-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1,PodSandboxId:d24ba04db930e91176979c74dc3dd4d42613be658694683f9b1940988093f274,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228273661486870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b18924acfd4d72129dec681761dc7e0d,},Annotations:map[
string]string{io.kubernetes.container.hash: 547b9474,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f,PodSandboxId:d671a1828f6193b249faf9a4b6a8e3003ecfb8a2730173bf2597aa8131f9c0f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228273745792333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec9370d627717114473c25d049fcefb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249,PodSandboxId:ab600e8cd42e1d241ed0afd1bbddb5a35619bcbc31cdc206def77155a5713dc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228273626794447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb301e61c8490e956bfefe1ed20670f5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 5e727e58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031,PodSandboxId:fdb5a19c0f4892ccc5be280826a890dadf1554e5e56ad554e138a6bd09a3f163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228273633333558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beacfc2e631a20f6822e78f2107d4e
bb,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c5013a5-3722-4926-8201-ffe808c407f8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.693473383Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8d806c3-c59d-47d1-b9b8-5d22c0cb2539 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.693545944Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8d806c3-c59d-47d1-b9b8-5d22c0cb2539 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.694586277Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c2625104-c27a-452c-b261-41a6067ba0ae name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.694990052Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229087694966452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2625104-c27a-452c-b261-41a6067ba0ae name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.695660469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=173700ae-1f7c-49bc-b29a-d98fac97b7ca name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.695709942Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=173700ae-1f7c-49bc-b29a-d98fac97b7ca name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:51:27 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:51:27.695949091Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228309199167879,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c48508b30ca15f6432a84141dd0b289e83aa9987e92fc3f9545889492605b8,PodSandboxId:5586f183312b241e003e9f7240dd5a617efdb6a93ac13d42d3956a4274f4b20f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718228289028635935,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d9ff0c0-b2e4-4535-b3e5-3cd361febf51,},Annotations:map[string]string{io.kubernetes.container.hash: 629593af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266,PodSandboxId:fc1a2a9794167dad660926e30bd665fa3f91e43e219af59cb20c26bd5ad50f52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228286137199473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cllsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e26b02-5b11-490e-a1b9-0f12c5ba3830,},Annotations:map[string]string{io.kubernetes.container.hash: c6223842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd,PodSandboxId:298152ff9d202bf8c1ded25c6afd2cb835cb421a74775d6f68e79b86790270c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718228278560675981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lrgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f9342e-2
677-44be-8e22-2a8f45feeb57,},Annotations:map[string]string{io.kubernetes.container.hash: 2db9a195,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718228278389385625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b
-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1,PodSandboxId:d24ba04db930e91176979c74dc3dd4d42613be658694683f9b1940988093f274,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228273661486870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b18924acfd4d72129dec681761dc7e0d,},Annotations:map[
string]string{io.kubernetes.container.hash: 547b9474,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f,PodSandboxId:d671a1828f6193b249faf9a4b6a8e3003ecfb8a2730173bf2597aa8131f9c0f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228273745792333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec9370d627717114473c25d049fcefb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249,PodSandboxId:ab600e8cd42e1d241ed0afd1bbddb5a35619bcbc31cdc206def77155a5713dc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228273626794447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb301e61c8490e956bfefe1ed20670f5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 5e727e58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031,PodSandboxId:fdb5a19c0f4892ccc5be280826a890dadf1554e5e56ad554e138a6bd09a3f163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228273633333558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beacfc2e631a20f6822e78f2107d4e
bb,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=173700ae-1f7c-49bc-b29a-d98fac97b7ca name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2ec17a45953ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   c2c1a3fc0fb25       storage-provisioner
	c1c48508b30ca       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   5586f183312b2       busybox
	9247a0b60b235       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   fc1a2a9794167       coredns-7db6d8ff4d-cllsk
	976fbe2261bae       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago      Running             kube-proxy                1                   298152ff9d202       kube-proxy-8lrgv
	58692ec525480       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   c2c1a3fc0fb25       storage-provisioner
	74488395e0d90       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago      Running             kube-scheduler            1                   d671a1828f619       kube-scheduler-default-k8s-diff-port-376087
	d482ceea3aaf0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   d24ba04db930e       etcd-default-k8s-diff-port-376087
	73a7a9216e1bd       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      13 minutes ago      Running             kube-controller-manager   1                   fdb5a19c0f489       kube-controller-manager-default-k8s-diff-port-376087
	5a2481a728ef8       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      13 minutes ago      Running             kube-apiserver            1                   ab600e8cd42e1       kube-apiserver-default-k8s-diff-port-376087
	
	
	==> coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56775 - 63860 "HINFO IN 801067738441133078.377083572015025222. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.022540843s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-376087
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-376087
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=default-k8s-diff-port-376087
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T21_29_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:29:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-376087
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:51:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:48:40 +0000   Wed, 12 Jun 2024 21:29:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:48:40 +0000   Wed, 12 Jun 2024 21:29:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:48:40 +0000   Wed, 12 Jun 2024 21:29:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:48:40 +0000   Wed, 12 Jun 2024 21:38:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.80
	  Hostname:    default-k8s-diff-port-376087
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1fd9d83f072143639931caba4728e6dc
	  System UUID:                1fd9d83f-0721-4363-9931-caba4728e6dc
	  Boot ID:                    ea378891-f3db-4d1d-84fa-ecfd5d125b38
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7db6d8ff4d-cllsk                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-376087                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-376087             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-376087    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-8lrgv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-376087             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-569cc877fc-xj4xk                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-376087 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-376087 event: Registered Node default-k8s-diff-port-376087 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-376087 event: Registered Node default-k8s-diff-port-376087 in Controller
	
	
	==> dmesg <==
	[Jun12 21:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051535] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040107] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.514924] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.486366] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.617121] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.688679] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.061892] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066974] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.200501] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.123783] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.299113] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[  +4.492551] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.059931] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.926466] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +5.583934] kauditd_printk_skb: 97 callbacks suppressed
	[Jun12 21:38] systemd-fstab-generator[1548]: Ignoring "noauto" option for root device
	[  +3.745304] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.061558] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] <==
	{"level":"info","ts":"2024-06-12T21:38:31.343566Z","caller":"traceutil/trace.go:171","msg":"trace[1813186924] linearizableReadLoop","detail":"{readStateIndex:653; appliedIndex:653; }","duration":"340.825565ms","start":"2024-06-12T21:38:31.002728Z","end":"2024-06-12T21:38:31.343554Z","steps":["trace[1813186924] 'read index received'  (duration: 340.819216ms)","trace[1813186924] 'applied index is now lower than readState.Index'  (duration: 5.357µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T21:38:31.343846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"341.05472ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-376087\" ","response":"range_response_count:1 size:5801"}
	{"level":"info","ts":"2024-06-12T21:38:31.344639Z","caller":"traceutil/trace.go:171","msg":"trace[1892224416] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-376087; range_end:; response_count:1; response_revision:615; }","duration":"341.911964ms","start":"2024-06-12T21:38:31.002713Z","end":"2024-06-12T21:38:31.344625Z","steps":["trace[1892224416] 'agreement among raft nodes before linearized reading'  (duration: 340.876431ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T21:38:31.344712Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T21:38:31.002707Z","time spent":"341.991305ms","remote":"127.0.0.1:34150","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5824,"request content":"key:\"/registry/minions/default-k8s-diff-port-376087\" "}
	{"level":"warn","ts":"2024-06-12T21:38:32.129573Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"488.891024ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6214562927279338270 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-569cc877fc-xj4xk\" mod_revision:603 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-xj4xk\" value_size:4210 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-xj4xk\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-12T21:38:32.129676Z","caller":"traceutil/trace.go:171","msg":"trace[942067401] linearizableReadLoop","detail":"{readStateIndex:654; appliedIndex:653; }","duration":"781.179722ms","start":"2024-06-12T21:38:31.348484Z","end":"2024-06-12T21:38:32.129664Z","steps":["trace[942067401] 'read index received'  (duration: 291.982264ms)","trace[942067401] 'applied index is now lower than readState.Index'  (duration: 489.196437ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-12T21:38:32.129908Z","caller":"traceutil/trace.go:171","msg":"trace[1239788338] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"1.123546837s","start":"2024-06-12T21:38:31.00635Z","end":"2024-06-12T21:38:32.129897Z","steps":["trace[1239788338] 'process raft request'  (duration: 634.277088ms)","trace[1239788338] 'compare'  (duration: 488.799457ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T21:38:32.129991Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T21:38:31.006338Z","time spent":"1.123615483s","remote":"127.0.0.1:34158","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4276,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-569cc877fc-xj4xk\" mod_revision:603 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-xj4xk\" value_size:4210 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-xj4xk\" > >"}
	{"level":"warn","ts":"2024-06-12T21:38:32.130216Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"781.727375ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-xj4xk.17d85f858e9fbe3f\" ","response":"range_response_count:1 size:804"}
	{"level":"info","ts":"2024-06-12T21:38:32.130257Z","caller":"traceutil/trace.go:171","msg":"trace[786363981] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-569cc877fc-xj4xk.17d85f858e9fbe3f; range_end:; response_count:1; response_revision:616; }","duration":"781.784273ms","start":"2024-06-12T21:38:31.348465Z","end":"2024-06-12T21:38:32.130249Z","steps":["trace[786363981] 'agreement among raft nodes before linearized reading'  (duration: 781.687197ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T21:38:32.130276Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T21:38:31.348452Z","time spent":"781.819641ms","remote":"127.0.0.1:34034","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":827,"request content":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-xj4xk.17d85f858e9fbe3f\" "}
	{"level":"warn","ts":"2024-06-12T21:38:32.130395Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"633.149372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-xj4xk\" ","response":"range_response_count:1 size:4291"}
	{"level":"info","ts":"2024-06-12T21:38:32.130445Z","caller":"traceutil/trace.go:171","msg":"trace[118155624] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-xj4xk; range_end:; response_count:1; response_revision:616; }","duration":"633.21802ms","start":"2024-06-12T21:38:31.497219Z","end":"2024-06-12T21:38:32.130437Z","steps":["trace[118155624] 'agreement among raft nodes before linearized reading'  (duration: 633.148542ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T21:38:32.13047Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T21:38:31.497206Z","time spent":"633.257969ms","remote":"127.0.0.1:34158","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4314,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-xj4xk\" "}
	{"level":"warn","ts":"2024-06-12T21:38:32.130682Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"588.698045ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-12T21:38:32.130725Z","caller":"traceutil/trace.go:171","msg":"trace[840756090] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:616; }","duration":"588.767089ms","start":"2024-06-12T21:38:31.541949Z","end":"2024-06-12T21:38:32.130716Z","steps":["trace[840756090] 'agreement among raft nodes before linearized reading'  (duration: 588.714723ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T21:38:32.130749Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T21:38:31.541932Z","time spent":"588.81189ms","remote":"127.0.0.1:33936","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-06-12T21:38:32.256839Z","caller":"traceutil/trace.go:171","msg":"trace[693569229] linearizableReadLoop","detail":"{readStateIndex:655; appliedIndex:654; }","duration":"119.130643ms","start":"2024-06-12T21:38:32.137689Z","end":"2024-06-12T21:38:32.256819Z","steps":["trace[693569229] 'read index received'  (duration: 116.972605ms)","trace[693569229] 'applied index is now lower than readState.Index'  (duration: 2.157264ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T21:38:32.25716Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.447154ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-376087\" ","response":"range_response_count:1 size:5801"}
	{"level":"info","ts":"2024-06-12T21:38:32.257223Z","caller":"traceutil/trace.go:171","msg":"trace[1571614967] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-376087; range_end:; response_count:1; response_revision:617; }","duration":"119.537407ms","start":"2024-06-12T21:38:32.137674Z","end":"2024-06-12T21:38:32.257211Z","steps":["trace[1571614967] 'agreement among raft nodes before linearized reading'  (duration: 119.28556ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T21:38:32.257493Z","caller":"traceutil/trace.go:171","msg":"trace[1073083622] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"121.161057ms","start":"2024-06-12T21:38:32.136318Z","end":"2024-06-12T21:38:32.257479Z","steps":["trace[1073083622] 'process raft request'  (duration: 118.388483ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T21:38:58.539665Z","caller":"traceutil/trace.go:171","msg":"trace[255011594] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"130.768276ms","start":"2024-06-12T21:38:58.408881Z","end":"2024-06-12T21:38:58.539649Z","steps":["trace[255011594] 'process raft request'  (duration: 130.526182ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T21:47:55.996594Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":841}
	{"level":"info","ts":"2024-06-12T21:47:56.00618Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":841,"took":"9.037743ms","hash":2086884593,"current-db-size-bytes":2600960,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2600960,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-06-12T21:47:56.006274Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2086884593,"revision":841,"compact-revision":-1}
	
	
	==> kernel <==
	 21:51:28 up 14 min,  0 users,  load average: 0.16, 0.12, 0.09
	Linux default-k8s-diff-port-376087 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] <==
	I0612 21:45:58.342765       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:47:57.342356       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:47:57.342471       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0612 21:47:58.343193       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:47:58.343280       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:47:58.343287       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:47:58.343210       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:47:58.343313       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:47:58.344522       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:48:58.343859       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:48:58.344118       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:48:58.344173       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:48:58.344987       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:48:58.345036       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:48:58.346302       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:50:58.345012       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:50:58.345210       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:50:58.345222       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:50:58.347229       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:50:58.347320       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:50:58.347352       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] <==
	I0612 21:45:40.942565       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:46:10.435023       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:46:10.950229       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:46:40.442165       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:46:40.957782       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:47:10.447152       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:47:10.965576       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:47:40.452391       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:47:40.972660       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:48:10.457462       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:48:10.980456       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:48:40.463174       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:48:40.988825       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:49:10.468204       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:49:10.997454       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0612 21:49:14.009618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="620.853µs"
	I0612 21:49:27.013272       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="111.316µs"
	E0612 21:49:40.473538       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:49:41.006944       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:50:10.479211       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:50:11.013742       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:50:40.484669       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:50:41.021633       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:51:10.489159       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:51:11.028688       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] <==
	I0612 21:37:58.800396       1 server_linux.go:69] "Using iptables proxy"
	I0612 21:37:58.821980       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.80"]
	I0612 21:37:58.869431       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 21:37:58.869486       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 21:37:58.869502       1 server_linux.go:165] "Using iptables Proxier"
	I0612 21:37:58.872020       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 21:37:58.872274       1 server.go:872] "Version info" version="v1.30.1"
	I0612 21:37:58.872306       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:37:58.873924       1 config.go:192] "Starting service config controller"
	I0612 21:37:58.875128       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 21:37:58.875258       1 config.go:101] "Starting endpoint slice config controller"
	I0612 21:37:58.875281       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 21:37:58.877112       1 config.go:319] "Starting node config controller"
	I0612 21:37:58.877137       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 21:37:58.975438       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 21:37:58.978166       1 shared_informer.go:320] Caches are synced for node config
	I0612 21:37:58.978263       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] <==
	I0612 21:37:54.590891       1 serving.go:380] Generated self-signed cert in-memory
	W0612 21:37:57.303482       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0612 21:37:57.307173       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 21:37:57.307219       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0612 21:37:57.307228       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 21:37:57.395153       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 21:37:57.395241       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:37:57.399688       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 21:37:57.399723       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 21:37:57.400309       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 21:37:57.400856       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 21:37:57.500500       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 21:49:00 default-k8s-diff-port-376087 kubelet[942]: E0612 21:49:00.016375     942 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 12 21:49:00 default-k8s-diff-port-376087 kubelet[942]: E0612 21:49:00.016806     942 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 12 21:49:00 default-k8s-diff-port-376087 kubelet[942]: E0612 21:49:00.017449     942 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvkf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-xj4xk_kube-system(d3ac0cb2-602d-489c-baeb-fa9a363de8af): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 12 21:49:00 default-k8s-diff-port-376087 kubelet[942]: E0612 21:49:00.017721     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:49:13 default-k8s-diff-port-376087 kubelet[942]: E0612 21:49:13.991640     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:49:26 default-k8s-diff-port-376087 kubelet[942]: E0612 21:49:26.992650     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:49:39 default-k8s-diff-port-376087 kubelet[942]: E0612 21:49:39.991245     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:49:52 default-k8s-diff-port-376087 kubelet[942]: E0612 21:49:52.993380     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:49:53 default-k8s-diff-port-376087 kubelet[942]: E0612 21:49:53.022647     942 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:49:53 default-k8s-diff-port-376087 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:49:53 default-k8s-diff-port-376087 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:49:53 default-k8s-diff-port-376087 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:49:53 default-k8s-diff-port-376087 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:50:03 default-k8s-diff-port-376087 kubelet[942]: E0612 21:50:03.991501     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:50:16 default-k8s-diff-port-376087 kubelet[942]: E0612 21:50:16.991434     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:50:30 default-k8s-diff-port-376087 kubelet[942]: E0612 21:50:30.991100     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:50:44 default-k8s-diff-port-376087 kubelet[942]: E0612 21:50:44.992491     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:50:53 default-k8s-diff-port-376087 kubelet[942]: E0612 21:50:53.021662     942 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:50:53 default-k8s-diff-port-376087 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:50:53 default-k8s-diff-port-376087 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:50:53 default-k8s-diff-port-376087 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:50:53 default-k8s-diff-port-376087 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:50:58 default-k8s-diff-port-376087 kubelet[942]: E0612 21:50:58.993532     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:51:12 default-k8s-diff-port-376087 kubelet[942]: E0612 21:51:12.991853     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:51:24 default-k8s-diff-port-376087 kubelet[942]: E0612 21:51:24.992394     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	
	
	==> storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] <==
	I0612 21:38:29.302536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0612 21:38:29.316256       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0612 21:38:29.316360       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0612 21:38:46.719855       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0612 21:38:46.720127       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-376087_6c5e4abe-2bbe-4ec1-b343-97a3ac787a86!
	I0612 21:38:46.720751       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a4eeb3f-de04-466b-82c0-44d5f3aabecc", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-376087_6c5e4abe-2bbe-4ec1-b343-97a3ac787a86 became leader
	I0612 21:38:46.820391       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-376087_6c5e4abe-2bbe-4ec1-b343-97a3ac787a86!
	
	
	==> storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] <==
	I0612 21:37:58.538903       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0612 21:38:28.546122       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-376087 -n default-k8s-diff-port-376087
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-376087 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-xj4xk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-376087 describe pod metrics-server-569cc877fc-xj4xk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-376087 describe pod metrics-server-569cc877fc-xj4xk: exit status 1 (63.151654ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-xj4xk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-376087 describe pod metrics-server-569cc877fc-xj4xk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.23s)
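Note on this failure mode: the UserAppExistsAfterStop runs in this report (the one above and the embed-certs run below) each spend their full 9-minute window waiting for a pod labelled k8s-app=kubernetes-dashboard to reach Running, and time out with context deadline exceeded. The following is a hedged, minimal client-go sketch of that kind of label-selector wait; it is illustrative only, not minikube's actual helpers_test.go code, and it assumes the current kubeconfig context already points at the cluster under test.

// Sketch only: illustrates the shape of the wait that timed out above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: current kubeconfig context targets the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same shape as the failing wait: up to 9 minutes for any pod labelled
	// k8s-app=kubernetes-dashboard to reach the Running phase.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	err = wait.PollUntilContextCancel(ctx, 10*time.Second, true, func(ctx context.Context) (bool, error) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			// Treat list errors as transient and keep polling until the deadline.
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
	// On the runs recorded in this report, err ends up as context.DeadlineExceeded.
	fmt.Println("wait result:", err)
}

Once such a wait exceeds its deadline, the framework proceeds to the post-mortem steps visible around each failure here: a status check, "logs -n 25", and a describe of any non-running pods.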

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-591460 -n embed-certs-591460
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-06-12 21:52:30.447353386 +0000 UTC m=+6102.061803763
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-591460 -n embed-certs-591460
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-591460 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-591460 logs -n 25: (2.140757575s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| delete  | -p bridge-701638                                       | bridge-701638                | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-576552 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | disable-driver-mounts-576552                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-087875             | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-376087  | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-591460            | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-983302        | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-087875                  | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-376087       | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:42 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-591460                 | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-983302             | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 21:33:52
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 21:33:52.855557   80762 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:33:52.855829   80762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:33:52.855839   80762 out.go:304] Setting ErrFile to fd 2...
	I0612 21:33:52.855845   80762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:33:52.856037   80762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:33:52.856582   80762 out.go:298] Setting JSON to false
	I0612 21:33:52.857472   80762 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8178,"bootTime":1718219855,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:33:52.857527   80762 start.go:139] virtualization: kvm guest
	I0612 21:33:52.859369   80762 out.go:177] * [old-k8s-version-983302] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:33:52.860886   80762 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:33:52.860907   80762 notify.go:220] Checking for updates...
	I0612 21:33:52.862185   80762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:33:52.863642   80762 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:33:52.865031   80762 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:33:52.866306   80762 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:33:52.867535   80762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:33:52.869148   80762 config.go:182] Loaded profile config "old-k8s-version-983302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:33:52.869530   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:33:52.869597   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:33:52.884278   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41163
	I0612 21:33:52.884743   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:33:52.885211   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:33:52.885234   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:33:52.885575   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:33:52.885768   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:33:52.887577   80762 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0612 21:33:52.888972   80762 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:33:52.889265   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:33:52.889296   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:33:52.903649   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44493
	I0612 21:33:52.904087   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:33:52.904500   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:33:52.904518   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:33:52.904831   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:33:52.904988   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:33:52.939030   80762 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 21:33:52.940484   80762 start.go:297] selected driver: kvm2
	I0612 21:33:52.940497   80762 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:33:52.940622   80762 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:33:52.941314   80762 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:33:52.941389   80762 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:33:52.956273   80762 install.go:137] /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:33:52.956646   80762 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:33:52.956674   80762 cni.go:84] Creating CNI manager for ""
	I0612 21:33:52.956682   80762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:33:52.956715   80762 start.go:340] cluster config:
	{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:33:52.956828   80762 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:33:52.958634   80762 out.go:177] * Starting "old-k8s-version-983302" primary control-plane node in "old-k8s-version-983302" cluster
	I0612 21:33:52.959924   80762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:33:52.959963   80762 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0612 21:33:52.959970   80762 cache.go:56] Caching tarball of preloaded images
	I0612 21:33:52.960065   80762 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:33:52.960079   80762 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0612 21:33:52.960190   80762 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:33:52.960397   80762 start.go:360] acquireMachinesLock for old-k8s-version-983302: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:33:57.423439   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:00.495475   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:06.575478   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:09.647560   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:15.727510   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:18.799491   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:24.879423   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:27.951495   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:34.031457   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:37.103569   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:43.183470   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:46.255491   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:52.335452   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:55.407544   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:01.487489   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:04.559546   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:10.639492   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:13.711372   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:19.791460   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:22.863455   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:28.943506   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:32.015443   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:38.095436   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:41.167526   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:47.247485   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:50.319435   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:56.399471   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:59.471485   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:05.551493   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:08.623467   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:14.703401   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:17.775479   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:23.855516   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:26.927418   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:33.007439   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:36.079449   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:42.159480   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:45.231482   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:51.311424   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:54.383524   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:00.463466   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:03.535465   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:09.615457   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:12.687462   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:18.767463   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:21.839431   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:24.843967   80243 start.go:364] duration metric: took 4m34.377488728s to acquireMachinesLock for "default-k8s-diff-port-376087"
	I0612 21:37:24.844034   80243 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:37:24.844046   80243 fix.go:54] fixHost starting: 
	I0612 21:37:24.844649   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:37:24.844689   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:37:24.859743   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I0612 21:37:24.860227   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:37:24.860659   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:37:24.860680   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:37:24.861055   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:37:24.861352   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:24.861550   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:37:24.863507   80243 fix.go:112] recreateIfNeeded on default-k8s-diff-port-376087: state=Stopped err=<nil>
	I0612 21:37:24.863538   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	W0612 21:37:24.863708   80243 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:37:24.865564   80243 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-376087" ...
	I0612 21:37:24.866899   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Start
	I0612 21:37:24.867064   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring networks are active...
	I0612 21:37:24.867951   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring network default is active
	I0612 21:37:24.868390   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring network mk-default-k8s-diff-port-376087 is active
	I0612 21:37:24.868746   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Getting domain xml...
	I0612 21:37:24.869408   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Creating domain...
	I0612 21:37:24.841481   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:37:24.841529   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:37:24.841912   80157 buildroot.go:166] provisioning hostname "no-preload-087875"
	I0612 21:37:24.841938   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:37:24.842149   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:37:24.843818   80157 machine.go:97] duration metric: took 4m37.413209096s to provisionDockerMachine
	I0612 21:37:24.843853   80157 fix.go:56] duration metric: took 4m37.434262933s for fixHost
	I0612 21:37:24.843860   80157 start.go:83] releasing machines lock for "no-preload-087875", held for 4m37.434303466s
	W0612 21:37:24.843897   80157 start.go:713] error starting host: provision: host is not running
	W0612 21:37:24.843971   80157 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0612 21:37:24.843980   80157 start.go:728] Will try again in 5 seconds ...
	I0612 21:37:26.077364   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting to get IP...
	I0612 21:37:26.078173   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.078646   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.078686   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.078611   81491 retry.go:31] will retry after 224.429366ms: waiting for machine to come up
	I0612 21:37:26.305227   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.305668   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.305699   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.305627   81491 retry.go:31] will retry after 298.325251ms: waiting for machine to come up
	I0612 21:37:26.605155   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.605587   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.605622   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.605558   81491 retry.go:31] will retry after 327.789765ms: waiting for machine to come up
	I0612 21:37:26.935066   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.935536   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.935567   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.935477   81491 retry.go:31] will retry after 381.56012ms: waiting for machine to come up
	I0612 21:37:27.319036   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.319485   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.319516   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:27.319429   81491 retry.go:31] will retry after 474.663822ms: waiting for machine to come up
	I0612 21:37:27.796149   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.796596   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.796635   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:27.796564   81491 retry.go:31] will retry after 943.868595ms: waiting for machine to come up
	I0612 21:37:28.741715   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:28.742226   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:28.742259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:28.742180   81491 retry.go:31] will retry after 1.014472282s: waiting for machine to come up
	I0612 21:37:29.758384   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:29.758928   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:29.758947   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:29.758867   81491 retry.go:31] will retry after 971.872729ms: waiting for machine to come up
	I0612 21:37:29.845647   80157 start.go:360] acquireMachinesLock for no-preload-087875: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:37:30.732362   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:30.732794   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:30.732827   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:30.732742   81491 retry.go:31] will retry after 1.352202491s: waiting for machine to come up
	I0612 21:37:32.087272   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:32.087702   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:32.087726   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:32.087663   81491 retry.go:31] will retry after 2.276552983s: waiting for machine to come up
	I0612 21:37:34.367159   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:34.367579   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:34.367613   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:34.367520   81491 retry.go:31] will retry after 1.785262755s: waiting for machine to come up
	I0612 21:37:36.154927   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:36.155388   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:36.155412   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:36.155357   81491 retry.go:31] will retry after 3.309693081s: waiting for machine to come up
	I0612 21:37:39.468800   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:39.469443   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:39.469469   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:39.469393   81491 retry.go:31] will retry after 4.284995408s: waiting for machine to come up
	I0612 21:37:45.096430   80404 start.go:364] duration metric: took 4m40.295909999s to acquireMachinesLock for "embed-certs-591460"
	I0612 21:37:45.096485   80404 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:37:45.096490   80404 fix.go:54] fixHost starting: 
	I0612 21:37:45.096932   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:37:45.096972   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:37:45.113819   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39005
	I0612 21:37:45.114290   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:37:45.114823   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:37:45.114843   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:37:45.115208   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:37:45.115415   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:37:45.115578   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:37:45.117131   80404 fix.go:112] recreateIfNeeded on embed-certs-591460: state=Stopped err=<nil>
	I0612 21:37:45.117156   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	W0612 21:37:45.117324   80404 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:37:45.119535   80404 out.go:177] * Restarting existing kvm2 VM for "embed-certs-591460" ...
	I0612 21:37:43.759195   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.759548   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Found IP for machine: 192.168.61.80
	I0612 21:37:43.759575   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has current primary IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.759583   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Reserving static IP address...
	I0612 21:37:43.760031   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Reserved static IP address: 192.168.61.80
	I0612 21:37:43.760063   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-376087", mac: "52:54:00:01:75:58", ip: "192.168.61.80"} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.760075   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for SSH to be available...
	I0612 21:37:43.760120   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | skip adding static IP to network mk-default-k8s-diff-port-376087 - found existing host DHCP lease matching {name: "default-k8s-diff-port-376087", mac: "52:54:00:01:75:58", ip: "192.168.61.80"}
	I0612 21:37:43.760134   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Getting to WaitForSSH function...
	I0612 21:37:43.762259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.762597   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.762626   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.762741   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Using SSH client type: external
	I0612 21:37:43.762771   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa (-rw-------)
	I0612 21:37:43.762804   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:37:43.762842   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | About to run SSH command:
	I0612 21:37:43.762860   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | exit 0
	I0612 21:37:43.891446   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | SSH cmd err, output: <nil>: 
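The WaitForSSH lines above show minikube probing the guest with an external ssh client until a plain "exit 0" succeeds. The following is only a hedged sketch of that idea using the options and paths copied from the log; the function name and structure are illustrative, not minikube's actual code.

    package main

    import (
    	"log"
    	"os/exec"
    )

    // waitForSSH runs "exit 0" on the guest with an external ssh client,
    // mirroring the option list printed in the DBG lines above.
    func waitForSSH(addr, keyPath string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@" + addr,
    		"exit 0", // probe command; success means sshd is reachable
    	}
    	return exec.Command("/usr/bin/ssh", args...).Run()
    }

    func main() {
    	// Address and key path are the ones shown in the log above.
    	err := waitForSSH("192.168.61.80",
    		"/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa")
    	log.Printf("SSH probe result: %v", err)
    }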
	I0612 21:37:43.891831   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetConfigRaw
	I0612 21:37:43.892485   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:43.895220   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.895625   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.895656   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.895928   80243 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/config.json ...
	I0612 21:37:43.896140   80243 machine.go:94] provisionDockerMachine start ...
	I0612 21:37:43.896161   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:43.896388   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:43.898898   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.899317   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.899346   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.899539   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:43.899727   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:43.899868   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:43.900019   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:43.900171   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:43.900360   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:43.900371   80243 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:37:44.016295   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:37:44.016327   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.016577   80243 buildroot.go:166] provisioning hostname "default-k8s-diff-port-376087"
	I0612 21:37:44.016602   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.016804   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.019396   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.019732   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.019763   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.019881   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.020084   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.020214   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.020418   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.020612   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.020803   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.020820   80243 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-376087 && echo "default-k8s-diff-port-376087" | sudo tee /etc/hostname
	I0612 21:37:44.146019   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376087
	
	I0612 21:37:44.146049   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.148758   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.149204   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.149238   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.149356   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.149538   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.149731   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.149873   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.150013   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.150187   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.150204   80243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-376087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-376087/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-376087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:37:44.272821   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:37:44.272852   80243 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:37:44.272887   80243 buildroot.go:174] setting up certificates
	I0612 21:37:44.272895   80243 provision.go:84] configureAuth start
	I0612 21:37:44.272903   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.273185   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:44.275991   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.276337   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.276366   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.276591   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.279011   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.279370   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.279396   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.279521   80243 provision.go:143] copyHostCerts
	I0612 21:37:44.279576   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:37:44.279585   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:37:44.279649   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:37:44.279740   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:37:44.279748   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:37:44.279770   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:37:44.279828   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:37:44.279835   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:37:44.279855   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:37:44.279914   80243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-376087 san=[127.0.0.1 192.168.61.80 default-k8s-diff-port-376087 localhost minikube]
	I0612 21:37:44.410909   80243 provision.go:177] copyRemoteCerts
	I0612 21:37:44.410974   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:37:44.410999   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.413740   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.414140   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.414173   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.414406   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.414597   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.414759   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.414904   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:44.501641   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:37:44.526082   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0612 21:37:44.549455   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:37:44.572447   80243 provision.go:87] duration metric: took 299.539656ms to configureAuth
	I0612 21:37:44.572473   80243 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:37:44.572632   80243 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:37:44.572731   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.575518   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.575913   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.575948   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.576170   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.576383   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.576553   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.576754   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.576913   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.577134   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.577155   80243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:37:44.851891   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:37:44.851922   80243 machine.go:97] duration metric: took 955.766062ms to provisionDockerMachine
	I0612 21:37:44.851936   80243 start.go:293] postStartSetup for "default-k8s-diff-port-376087" (driver="kvm2")
	I0612 21:37:44.851951   80243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:37:44.851970   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:44.852318   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:37:44.852352   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.855231   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.855556   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.855595   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.855727   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.855935   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.856127   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.856260   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:44.941821   80243 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:37:44.946013   80243 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:37:44.946052   80243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:37:44.946120   80243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:37:44.946200   80243 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:37:44.946281   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:37:44.955467   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:37:44.979379   80243 start.go:296] duration metric: took 127.428385ms for postStartSetup
	I0612 21:37:44.979421   80243 fix.go:56] duration metric: took 20.135375416s for fixHost
	I0612 21:37:44.979445   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.981891   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.982259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.982287   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.982520   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.982713   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.982920   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.983040   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.983220   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.983450   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.983467   80243 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 21:37:45.096266   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228265.072559389
	
	I0612 21:37:45.096288   80243 fix.go:216] guest clock: 1718228265.072559389
	I0612 21:37:45.096295   80243 fix.go:229] Guest: 2024-06-12 21:37:45.072559389 +0000 UTC Remote: 2024-06-12 21:37:44.979426071 +0000 UTC m=+294.653210040 (delta=93.133318ms)
	I0612 21:37:45.096313   80243 fix.go:200] guest clock delta is within tolerance: 93.133318ms
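The fix.go lines above compare the guest clock against the host reading and accept the 93.133318ms delta as within tolerance. A minimal sketch of that comparison, assuming minikube simply checks the absolute difference against a fixed threshold (the 1s threshold here is an assumption for illustration):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values taken from the log: guest clock 1718228265.072559389,
    	// host reading 93.133318ms earlier.
    	guest := time.Unix(1718228265, 72559389)
    	host := guest.Add(-93133318 * time.Nanosecond)

    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // assumed tolerance for this sketch
    	fmt.Printf("delta=%v within tolerance=%v: %t\n", delta, tolerance, delta <= tolerance)
    }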
	I0612 21:37:45.096318   80243 start.go:83] releasing machines lock for "default-k8s-diff-port-376087", held for 20.252307995s
	I0612 21:37:45.096346   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.096683   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:45.099332   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.099761   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.099805   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.099902   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100560   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100767   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100841   80243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:37:45.100880   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:45.100981   80243 ssh_runner.go:195] Run: cat /version.json
	I0612 21:37:45.101007   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:45.103590   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.103774   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104052   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.104084   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104186   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.104202   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:45.104210   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104417   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:45.104430   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:45.104650   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:45.104651   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:45.104837   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:45.104852   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:45.104993   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:45.208199   80243 ssh_runner.go:195] Run: systemctl --version
	I0612 21:37:45.214375   80243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:37:45.370991   80243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:37:45.378676   80243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:37:45.378744   80243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:37:45.400622   80243 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:37:45.400642   80243 start.go:494] detecting cgroup driver to use...
	I0612 21:37:45.400709   80243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:37:45.416775   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:37:45.430261   80243 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:37:45.430314   80243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:37:45.445482   80243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:37:45.461471   80243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:37:45.578411   80243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:37:45.750493   80243 docker.go:233] disabling docker service ...
	I0612 21:37:45.750556   80243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:37:45.769072   80243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:37:45.784755   80243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:37:45.907970   80243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:37:46.031847   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:37:46.046473   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:37:46.067764   80243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:37:46.067813   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.080604   80243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:37:46.080660   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.093611   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.104443   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.117070   80243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:37:46.128759   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.139977   80243 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.157893   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
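The sed commands above set the pause image, switch CRI-O to the cgroupfs driver, force conmon into the "pod" cgroup and add the unprivileged-port sysctl. A plausible end state of /etc/crio/crio.conf.d/02-crio.conf after those edits, captured as a Go string constant purely for illustration; the section placement follows stock CRI-O TOML layout and is an assumption, and every other key the real drop-in carries is omitted.

    package main

    import "fmt"

    // crioDropIn is the assumed shape of the drop-in after the sed edits above.
    const crioDropIn = `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    `

    func main() { fmt.Print(crioDropIn) }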
	I0612 21:37:46.168896   80243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:37:46.179765   80243 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:37:46.179816   80243 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:37:46.194059   80243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:37:46.205474   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:37:46.322562   80243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:37:46.479073   80243 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:37:46.479149   80243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:37:46.484557   80243 start.go:562] Will wait 60s for crictl version
	I0612 21:37:46.484609   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:37:46.488403   80243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:37:46.529210   80243 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:37:46.529301   80243 ssh_runner.go:195] Run: crio --version
	I0612 21:37:46.561476   80243 ssh_runner.go:195] Run: crio --version
	I0612 21:37:46.594477   80243 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:37:45.120900   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Start
	I0612 21:37:45.121084   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring networks are active...
	I0612 21:37:45.121776   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring network default is active
	I0612 21:37:45.122108   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring network mk-embed-certs-591460 is active
	I0612 21:37:45.122554   80404 main.go:141] libmachine: (embed-certs-591460) Getting domain xml...
	I0612 21:37:45.123260   80404 main.go:141] libmachine: (embed-certs-591460) Creating domain...
	I0612 21:37:46.357867   80404 main.go:141] libmachine: (embed-certs-591460) Waiting to get IP...
	I0612 21:37:46.358704   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.359164   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.359265   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.359144   81627 retry.go:31] will retry after 278.948395ms: waiting for machine to come up
	I0612 21:37:46.639971   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.640491   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.640523   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.640433   81627 retry.go:31] will retry after 342.550517ms: waiting for machine to come up
	I0612 21:37:46.985065   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.985590   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.985618   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.985548   81627 retry.go:31] will retry after 297.683214ms: waiting for machine to come up
	I0612 21:37:47.285192   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:47.285650   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:47.285688   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:47.285615   81627 retry.go:31] will retry after 415.994572ms: waiting for machine to come up
	I0612 21:37:47.702894   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:47.703398   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:47.703424   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:47.703353   81627 retry.go:31] will retry after 672.441633ms: waiting for machine to come up
	I0612 21:37:48.377227   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:48.377772   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:48.377802   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:48.377735   81627 retry.go:31] will retry after 790.165478ms: waiting for machine to come up
	I0612 21:37:49.169651   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:49.170194   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:49.170224   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:49.170134   81627 retry.go:31] will retry after 953.609739ms: waiting for machine to come up
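The retry.go lines above poll libvirt for the domain's DHCP lease and back off between attempts ("will retry after ...: waiting for machine to come up"). A self-contained sketch of that loop; the jittered backoff range and attempt limit are assumptions, and lookupIP merely stands in for the real lease lookup so the retry behaviour is visible.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for the libvirt DHCP-lease query; it always fails here.
    func lookupIP(domain string) (string, error) {
    	return "", errors.New("unable to find current IP address of domain " + domain)
    }

    func main() {
    	const maxAttempts = 5
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		ip, err := lookupIP("embed-certs-591460")
    		if err == nil {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		// Randomized backoff before the next attempt, like the log's
    		// "will retry after Xms" messages.
    		backoff := time.Duration(200+rand.Intn(800)) * time.Millisecond
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
    		time.Sleep(backoff)
    	}
    	fmt.Println("gave up waiting for machine")
    }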
	I0612 21:37:46.595772   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:46.599221   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:46.599682   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:46.599712   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:46.599919   80243 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0612 21:37:46.604573   80243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:37:46.617274   80243 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-376087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:37:46.617388   80243 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:37:46.617443   80243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:37:46.663227   80243 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:37:46.663306   80243 ssh_runner.go:195] Run: which lz4
	I0612 21:37:46.667878   80243 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 21:37:46.672384   80243 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:37:46.672416   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 21:37:48.195844   80243 crio.go:462] duration metric: took 1.527996646s to copy over tarball
	I0612 21:37:48.195908   80243 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
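The extraction step just started copies the ~394 MB preload tarball to the guest and unpacks the image store under /var. A minimal sketch of that step; the tar invocation is copied verbatim from the log, and running it assumes /preloaded.tar.lz4 is present and lz4 is installed on the target.

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", // decompress through lz4
    		"-C", "/var", // unpack the image store under /var
    		"-xf", "/preloaded.tar.lz4")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		log.Fatalf("extracting preload failed: %v\n%s", err, out)
    	}
    	log.Print("preloaded images extracted")
    }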
	I0612 21:37:50.125800   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:50.126305   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:50.126337   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:50.126260   81627 retry.go:31] will retry after 938.251336ms: waiting for machine to come up
	I0612 21:37:51.065851   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:51.066225   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:51.066247   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:51.066194   81627 retry.go:31] will retry after 1.635454683s: waiting for machine to come up
	I0612 21:37:52.704193   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:52.704663   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:52.704687   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:52.704633   81627 retry.go:31] will retry after 1.56455027s: waiting for machine to come up
	I0612 21:37:54.271391   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:54.271873   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:54.271919   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:54.271826   81627 retry.go:31] will retry after 2.052574222s: waiting for machine to come up
	I0612 21:37:50.464553   80243 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.268615304s)
	I0612 21:37:50.464601   80243 crio.go:469] duration metric: took 2.268715227s to extract the tarball
	I0612 21:37:50.464612   80243 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:37:50.502406   80243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:37:50.550796   80243 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:37:50.550821   80243 cache_images.go:84] Images are preloaded, skipping loading
	I0612 21:37:50.550831   80243 kubeadm.go:928] updating node { 192.168.61.80 8444 v1.30.1 crio true true} ...
	I0612 21:37:50.550957   80243 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-376087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:37:50.551042   80243 ssh_runner.go:195] Run: crio config
	I0612 21:37:50.603232   80243 cni.go:84] Creating CNI manager for ""
	I0612 21:37:50.603256   80243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:37:50.603268   80243 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:37:50.603299   80243 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.80 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-376087 NodeName:default-k8s-diff-port-376087 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:37:50.603459   80243 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.80
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-376087"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:37:50.603524   80243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:37:50.614003   80243 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:37:50.614082   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:37:50.623416   80243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0612 21:37:50.640203   80243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:37:50.656668   80243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
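The 2169-byte kubeadm.yaml.new written here is the config dumped a few lines earlier. As a rough illustration only, and not minikube's actual template, the InitConfiguration portion could be rendered from the "kubeadm options" values shown in the log (advertise address 192.168.61.80, bind port 8444, node name default-k8s-diff-port-376087); the template text and the data-struct field names are assumptions.

    package main

    import (
    	"os"
    	"text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
      taints: []
    `

    func main() {
    	data := struct {
    		AdvertiseAddress string
    		APIServerPort    int
    		NodeName         string
    	}{"192.168.61.80", 8444, "default-k8s-diff-port-376087"}
    	tmpl := template.Must(template.New("init").Parse(initCfg))
    	if err := tmpl.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }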
	I0612 21:37:50.674601   80243 ssh_runner.go:195] Run: grep 192.168.61.80	control-plane.minikube.internal$ /etc/hosts
	I0612 21:37:50.678858   80243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:37:50.692389   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:37:50.822225   80243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:37:50.840703   80243 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087 for IP: 192.168.61.80
	I0612 21:37:50.840734   80243 certs.go:194] generating shared ca certs ...
	I0612 21:37:50.840758   80243 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:37:50.840936   80243 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:37:50.840986   80243 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:37:50.840999   80243 certs.go:256] generating profile certs ...
	I0612 21:37:50.841133   80243 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/client.key
	I0612 21:37:50.841200   80243 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.key.0afce446
	I0612 21:37:50.841238   80243 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.key
	I0612 21:37:50.841357   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:37:50.841398   80243 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:37:50.841409   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:37:50.841438   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:37:50.841469   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:37:50.841489   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:37:50.841529   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:37:50.842311   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:37:50.880075   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:37:50.914504   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:37:50.945724   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:37:50.975702   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0612 21:37:51.009817   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:37:51.039086   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:37:51.064146   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:37:51.088483   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:37:51.112785   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:37:51.136192   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:37:51.159239   80243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:37:51.175719   80243 ssh_runner.go:195] Run: openssl version
	I0612 21:37:51.181707   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:37:51.193498   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.198415   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.198475   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.204601   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:37:51.216354   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:37:51.231979   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.236952   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.237018   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.243461   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:37:51.258481   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:37:51.273412   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.279356   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.279420   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.285551   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
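The three "ln -fs" steps above install each CA under its OpenSSL subject-name hash (51391683.0, 3ec20f2e.0, b5213941.0), which is how CApath-style lookups in /etc/ssl/certs locate a trusted certificate. A minimal Go sketch of that hash-and-link step follows; the helper name and paths are illustrative only, not minikube's own code:

    // hashlink.go - sketch of the subject-hash symlink step logged above.
    // Assumes openssl is on PATH; paths are illustrative.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash mirrors `ln -fs <cert> /etc/ssl/certs/<hash>.0`:
    // OpenSSL finds trusted CAs by the subject-name hash of the file.
    func linkBySubjectHash(certPath, trustDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join(trustDir, hash+".0")
    	_ = os.Remove(link) // -f semantics: replace an existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }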
	I0612 21:37:51.298066   80243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:37:51.302791   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:37:51.309402   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:37:51.316170   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:37:51.322785   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:37:51.329066   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:37:51.335031   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
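Each "-checkend 86400" probe above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would mean the cert is about to lapse and needs regeneration. The same test can be expressed without shelling out, as in this sketch (file path illustrative only):

    // checkend.go - sketch of the `openssl x509 -checkend 86400` test in pure Go.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the certificate's NotAfter falls inside the window d.
    func expiresWithin(certPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(certPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", certPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	// Path is illustrative; the checks above run against /var/lib/minikube/certs/*.crt on the node.
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }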
	I0612 21:37:51.340945   80243 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-376087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:37:51.341082   80243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:37:51.341143   80243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:37:51.383011   80243 cri.go:89] found id: ""
	I0612 21:37:51.383134   80243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:37:51.394768   80243 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:37:51.394794   80243 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:37:51.394800   80243 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:37:51.394852   80243 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:37:51.408147   80243 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:37:51.409094   80243 kubeconfig.go:125] found "default-k8s-diff-port-376087" server: "https://192.168.61.80:8444"
	I0612 21:37:51.411221   80243 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:37:51.421897   80243 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.80
	I0612 21:37:51.421934   80243 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:37:51.421949   80243 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:37:51.422029   80243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:37:51.470321   80243 cri.go:89] found id: ""
	I0612 21:37:51.470441   80243 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:37:51.488369   80243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:37:51.498367   80243 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:37:51.498388   80243 kubeadm.go:156] found existing configuration files:
	
	I0612 21:37:51.498449   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0612 21:37:51.510212   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:37:51.510287   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:37:51.520231   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0612 21:37:51.529270   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:37:51.529339   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:37:51.538902   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0612 21:37:51.548593   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:37:51.548652   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:37:51.558533   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0612 21:37:51.567995   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:37:51.568063   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:37:51.577695   80243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:37:51.587794   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:51.718155   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.602448   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.820456   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.901167   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.977502   80243 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:37:52.977606   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.477802   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.977879   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.995753   80243 api_server.go:72] duration metric: took 1.018251882s to wait for apiserver process to appear ...
	I0612 21:37:53.995788   80243 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:37:53.995812   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:53.996308   80243 api_server.go:269] stopped: https://192.168.61.80:8444/healthz: Get "https://192.168.61.80:8444/healthz": dial tcp 192.168.61.80:8444: connect: connection refused
	I0612 21:37:54.496045   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.293362   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:37:57.293394   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:37:57.293408   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.395854   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:57.395886   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:57.496122   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.505090   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:57.505124   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:57.996334   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:58.000606   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:58.000646   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:58.496177   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:58.504422   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 200:
	ok
	I0612 21:37:58.513123   80243 api_server.go:141] control plane version: v1.30.1
	I0612 21:37:58.513150   80243 api_server.go:131] duration metric: took 4.517354722s to wait for apiserver health ...
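The loop above keeps probing https://192.168.61.80:8444/healthz until it answers 200 "ok"; the earlier connection-refused, 403 (anonymous user before RBAC bootstrap roles exist) and 500 (post-start hooks still running) responses are expected while the restarted apiserver finishes initialising. A standalone poller doing the equivalent check might look like this sketch (endpoint and timeout assumed from this run; certificate verification is skipped, as a curl -k would):

    // healthz.go - sketch of the healthz polling loop shown above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	// The apiserver serves a cert signed by the cluster CA; for a quick
    	// readiness probe this sketch skips verification.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned "ok"
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.80:8444/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }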
	I0612 21:37:58.513158   80243 cni.go:84] Creating CNI manager for ""
	I0612 21:37:58.513163   80243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:37:58.514696   80243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:37:56.325937   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:56.326316   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:56.326343   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:56.326261   81627 retry.go:31] will retry after 3.51636746s: waiting for machine to come up
	I0612 21:37:58.516091   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:37:58.541034   80243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:37:58.585635   80243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:37:58.596829   80243 system_pods.go:59] 8 kube-system pods found
	I0612 21:37:58.596859   80243 system_pods.go:61] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:37:58.596867   80243 system_pods.go:61] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:37:58.596873   80243 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:37:58.596883   80243 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:37:58.596888   80243 system_pods.go:61] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0612 21:37:58.596893   80243 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:37:58.596899   80243 system_pods.go:61] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:37:58.596904   80243 system_pods.go:61] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:37:58.596910   80243 system_pods.go:74] duration metric: took 11.248328ms to wait for pod list to return data ...
	I0612 21:37:58.596917   80243 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:37:58.600081   80243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:37:58.600107   80243 node_conditions.go:123] node cpu capacity is 2
	I0612 21:37:58.600119   80243 node_conditions.go:105] duration metric: took 3.197181ms to run NodePressure ...
	I0612 21:37:58.600134   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:58.911963   80243 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:37:58.918455   80243 kubeadm.go:733] kubelet initialised
	I0612 21:37:58.918475   80243 kubeadm.go:734] duration metric: took 6.490654ms waiting for restarted kubelet to initialise ...
	I0612 21:37:58.918482   80243 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:37:58.924427   80243 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.930290   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.930329   80243 pod_ready.go:81] duration metric: took 5.86525ms for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.930339   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.930346   80243 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.935394   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.935416   80243 pod_ready.go:81] duration metric: took 5.061639ms for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.935426   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.935431   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.940238   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.940268   80243 pod_ready.go:81] duration metric: took 4.829842ms for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.940286   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.940295   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.989649   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.989686   80243 pod_ready.go:81] duration metric: took 49.380431ms for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.989702   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.989711   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:59.389868   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-proxy-8lrgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.389903   80243 pod_ready.go:81] duration metric: took 400.174877ms for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:59.389912   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-proxy-8lrgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.389918   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:59.790398   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.790425   80243 pod_ready.go:81] duration metric: took 400.499157ms for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:59.790435   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.790449   80243 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:00.189506   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:00.189533   80243 pod_ready.go:81] duration metric: took 399.075983ms for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:00.189551   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:00.189559   80243 pod_ready.go:38] duration metric: took 1.271068537s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
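The per-pod waits above gate on the PodReady condition and are skipped here only because the node itself has not reported Ready yet. A client-go sketch of that readiness test (kubeconfig path and pod name taken from this run, purely for illustration):

    // podready.go - sketch of the "Ready" check applied to each system pod above,
    // written against client-go rather than minikube's own helpers.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(clientset *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17779-14199/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := podReady(clientset, "kube-system", "coredns-7db6d8ff4d-cllsk")
    	fmt.Println("ready:", ready, "err:", err)
    }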
	I0612 21:38:00.189574   80243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:38:00.201480   80243 ops.go:34] apiserver oom_adj: -16
	I0612 21:38:00.201504   80243 kubeadm.go:591] duration metric: took 8.806697524s to restartPrimaryControlPlane
	I0612 21:38:00.201514   80243 kubeadm.go:393] duration metric: took 8.860579681s to StartCluster
	I0612 21:38:00.201536   80243 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:00.201601   80243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:38:00.203106   80243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:00.203416   80243 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:38:00.205568   80243 out.go:177] * Verifying Kubernetes components...
	I0612 21:38:00.203448   80243 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:38:00.203614   80243 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:00.207110   80243 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207120   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:00.207120   80243 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207143   80243 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207166   80243 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.207193   80243 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:38:00.207187   80243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-376087"
	I0612 21:38:00.207208   80243 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.207222   80243 addons.go:243] addon metrics-server should already be in state true
	I0612 21:38:00.207230   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.207263   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.207490   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207511   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207519   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.207544   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.207553   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207572   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.222521   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0612 21:38:00.222979   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.223496   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.223523   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.223899   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.224519   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.224555   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.227511   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
	I0612 21:38:00.227543   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33041
	I0612 21:38:00.227874   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.227930   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.228402   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.228409   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.228426   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.228471   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.228776   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.228780   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.228952   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.229291   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.229323   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.232640   80243 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.232662   80243 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:38:00.232690   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.233072   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.233103   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.240883   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38355
	I0612 21:38:00.241363   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.241839   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.241861   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.242217   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.242434   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.244544   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.244604   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0612 21:38:00.246924   80243 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:38:00.244915   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.248406   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:38:00.248430   80243 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:38:00.248451   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.248861   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.248887   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.249211   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.249431   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.251070   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.251137   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43271
	I0612 21:38:00.252729   80243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:00.251644   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.252033   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.252601   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.254033   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.254079   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.254111   80243 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:38:00.254127   80243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:38:00.254148   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.254211   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.254399   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.254515   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.254542   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.254712   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.254926   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.256878   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.256948   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.257836   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.258073   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.258105   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.258767   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.258993   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.259141   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.259283   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.272822   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42339
	I0612 21:38:00.273238   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.273710   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.273734   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.274221   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.274411   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.276056   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.276286   80243 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:38:00.276302   80243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:38:00.276323   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.279285   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.279351   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.279400   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.279516   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.279675   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.279809   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.279940   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.392656   80243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:00.411972   80243 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-376087" to be "Ready" ...
	I0612 21:38:00.502108   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:38:00.504572   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:38:00.504590   80243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:38:00.522021   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:38:00.522057   80243 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:38:00.538366   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:38:00.541981   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:38:00.541999   80243 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:38:00.561335   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:38:01.519955   80243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.017815416s)
	I0612 21:38:01.520006   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520019   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520087   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520100   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520312   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520334   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520343   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520350   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520422   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520435   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520444   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520452   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520554   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520573   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520647   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520678   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.520680   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.528807   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.528827   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.529143   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.529162   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.529166   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.556376   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.556399   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.556701   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.556750   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.556762   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.556780   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.556791   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.557157   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.557179   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.557190   80243 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-376087"
	I0612 21:38:01.559103   80243 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:37:59.844024   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:59.844481   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:59.844505   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:59.844433   81627 retry.go:31] will retry after 3.77902453s: waiting for machine to come up
	I0612 21:38:03.626861   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.627380   80404 main.go:141] libmachine: (embed-certs-591460) Found IP for machine: 192.168.39.147
	I0612 21:38:03.627399   80404 main.go:141] libmachine: (embed-certs-591460) Reserving static IP address...
	I0612 21:38:03.627416   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has current primary IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.627917   80404 main.go:141] libmachine: (embed-certs-591460) Reserved static IP address: 192.168.39.147
	I0612 21:38:03.627964   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "embed-certs-591460", mac: "52:54:00:41:f7:d9", ip: "192.168.39.147"} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.627981   80404 main.go:141] libmachine: (embed-certs-591460) Waiting for SSH to be available...
	I0612 21:38:03.628011   80404 main.go:141] libmachine: (embed-certs-591460) DBG | skip adding static IP to network mk-embed-certs-591460 - found existing host DHCP lease matching {name: "embed-certs-591460", mac: "52:54:00:41:f7:d9", ip: "192.168.39.147"}
	I0612 21:38:03.628030   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Getting to WaitForSSH function...
	I0612 21:38:03.630082   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.630480   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.630581   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.630762   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Using SSH client type: external
	I0612 21:38:03.630802   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa (-rw-------)
	I0612 21:38:03.630846   80404 main.go:141] libmachine: (embed-certs-591460) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:03.630872   80404 main.go:141] libmachine: (embed-certs-591460) DBG | About to run SSH command:
	I0612 21:38:03.630882   80404 main.go:141] libmachine: (embed-certs-591460) DBG | exit 0
	I0612 21:38:03.755304   80404 main.go:141] libmachine: (embed-certs-591460) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:03.755720   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetConfigRaw
	I0612 21:38:03.756310   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:03.758608   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.758927   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.758966   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.759153   80404 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/config.json ...
	I0612 21:38:03.759390   80404 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:03.759414   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:03.759641   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.761954   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.762215   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.762244   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.762371   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.762525   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.762689   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.762842   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.762995   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.763183   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.763206   80404 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:03.867900   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:03.867936   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:03.868185   80404 buildroot.go:166] provisioning hostname "embed-certs-591460"
	I0612 21:38:03.868210   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:03.868430   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.871347   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.871690   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.871721   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.871816   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.871977   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.872130   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.872258   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.872408   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.872588   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.872604   80404 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-591460 && echo "embed-certs-591460" | sudo tee /etc/hostname
	I0612 21:38:03.990526   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-591460
	
	I0612 21:38:03.990550   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.993057   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.993458   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.993485   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.993646   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.993830   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.993985   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.994125   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.994297   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.994499   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.994524   80404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-591460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-591460/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-591460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:04.120595   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
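The two SSH commands above set the guest's runtime hostname, persist it in /etc/hostname, and pin a 127.0.1.1 entry in /etc/hosts. A quick way to confirm the result from inside the VM (a sketch for the reader, not something the test itself runs):

	# run inside the guest over SSH
	hostname                        # expect: embed-certs-591460
	cat /etc/hostname               # expect: embed-certs-591460
	grep '^127.0.1.1' /etc/hosts    # expect: 127.0.1.1 embed-certs-591460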
	I0612 21:38:04.120623   80404 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:04.120640   80404 buildroot.go:174] setting up certificates
	I0612 21:38:04.120650   80404 provision.go:84] configureAuth start
	I0612 21:38:04.120658   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:04.120910   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:04.123483   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.123854   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.123879   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.124153   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.126901   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.127293   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.127318   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.127494   80404 provision.go:143] copyHostCerts
	I0612 21:38:04.127554   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:04.127566   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:04.127635   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:04.127736   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:04.127747   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:04.127785   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:04.127860   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:04.127870   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:04.127896   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:04.127960   80404 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.embed-certs-591460 san=[127.0.0.1 192.168.39.147 embed-certs-591460 localhost minikube]
	I0612 21:38:04.265296   80404 provision.go:177] copyRemoteCerts
	I0612 21:38:04.265361   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:04.265392   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.267703   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.268044   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.268090   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.268244   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.268421   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.268583   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.268780   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.349440   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:04.374868   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0612 21:38:04.398419   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:38:04.423319   80404 provision.go:87] duration metric: took 302.657777ms to configureAuth
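configureAuth above generates a server certificate signed by the shared minikube CA, with the SANs listed on the provision.go:117 line (127.0.0.1, 192.168.39.147, embed-certs-591460, localhost, minikube). minikube does this in Go; an equivalent openssl sketch, with an approximate subject and an arbitrary validity period, would be:

	# hypothetical stand-in for provision.go's server cert generation
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.embed-certs-591460"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.147,DNS:embed-certs-591460,DNS:localhost,DNS:minikube')

The resulting server.pem/server-key.pem pair is what the copyRemoteCerts step then places under /etc/docker on the guest.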
	I0612 21:38:04.423353   80404 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:04.423514   80404 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:04.423586   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.426301   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.426612   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.426641   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.426796   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.426971   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.427186   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.427331   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.427553   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:04.427723   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:04.427739   80404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:04.689161   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:04.689199   80404 machine.go:97] duration metric: took 929.790838ms to provisionDockerMachine
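The sysconfig fragment just written (/etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS) is not read by CRI-O on its own; the expectation is that the ISO's crio.service sources it through an EnvironmentFile= directive so the --insecure-registry flag lands on the crio command line. That is an assumption about the ISO, not something shown in this log; it can be checked on the guest with:

	systemctl cat crio | grep -E 'EnvironmentFile|ExecStart'
	cat /etc/sysconfig/crio.minikube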
	I0612 21:38:04.689212   80404 start.go:293] postStartSetup for "embed-certs-591460" (driver="kvm2")
	I0612 21:38:04.689223   80404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:04.689242   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.689569   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:04.689616   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.692484   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.692841   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.692864   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.693002   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.693191   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.693326   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.693469   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.923975   80762 start.go:364] duration metric: took 4m11.963543792s to acquireMachinesLock for "old-k8s-version-983302"
	I0612 21:38:04.924056   80762 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:38:04.924068   80762 fix.go:54] fixHost starting: 
	I0612 21:38:04.924507   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:04.924543   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:04.942022   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0612 21:38:04.942428   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:04.942891   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:38:04.942917   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:04.943345   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:04.943553   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:04.943726   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetState
	I0612 21:38:04.945403   80762 fix.go:112] recreateIfNeeded on old-k8s-version-983302: state=Stopped err=<nil>
	I0612 21:38:04.945427   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	W0612 21:38:04.945581   80762 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:38:04.947672   80762 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-983302" ...
	I0612 21:38:01.560387   80243 addons.go:510] duration metric: took 1.356939902s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0612 21:38:02.416070   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:04.416451   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:04.774287   80404 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:04.778568   80404 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:04.778596   80404 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:04.778667   80404 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:04.778740   80404 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:04.778819   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:04.788602   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:04.813969   80404 start.go:296] duration metric: took 124.741162ms for postStartSetup
	I0612 21:38:04.814020   80404 fix.go:56] duration metric: took 19.717527303s for fixHost
	I0612 21:38:04.814049   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.816907   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.817268   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.817294   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.817511   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.817728   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.817905   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.818087   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.818293   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:04.818501   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:04.818516   80404 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:04.923846   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228284.879920542
	
	I0612 21:38:04.923868   80404 fix.go:216] guest clock: 1718228284.879920542
	I0612 21:38:04.923874   80404 fix.go:229] Guest: 2024-06-12 21:38:04.879920542 +0000 UTC Remote: 2024-06-12 21:38:04.814026698 +0000 UTC m=+300.152179547 (delta=65.893844ms)
	I0612 21:38:04.923890   80404 fix.go:200] guest clock delta is within tolerance: 65.893844ms
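The fix.go lines above compare the guest clock against the host clock and accept the 65.893844ms delta as within tolerance. A rough shell equivalent of that check (a sketch; the actual tolerance value is not shown in this excerpt):

	# hypothetical re-run of the guest-clock comparison from the host
	guest=$(ssh -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa \
	  docker@192.168.39.147 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %.6fs\n", h - g }'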
	I0612 21:38:04.923894   80404 start.go:83] releasing machines lock for "embed-certs-591460", held for 19.827427255s
	I0612 21:38:04.923920   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.924155   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:04.926708   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.927102   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.927146   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.927281   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.927788   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.927955   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.928043   80404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:04.928099   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.928158   80404 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:04.928182   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.930931   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931237   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931377   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.931415   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931561   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.931587   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931592   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.931742   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.931790   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.931916   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.931916   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.932111   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.932127   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.932250   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:05.009184   80404 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:05.035746   80404 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:05.181527   80404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:05.189035   80404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:05.189113   80404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:05.205860   80404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:05.205886   80404 start.go:494] detecting cgroup driver to use...
	I0612 21:38:05.205957   80404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:05.223913   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:05.239598   80404 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:05.239679   80404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:05.253501   80404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:05.268094   80404 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:05.397260   80404 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:05.560454   80404 docker.go:233] disabling docker service ...
	I0612 21:38:05.560532   80404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:05.579197   80404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:05.593420   80404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:05.728145   80404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:05.860041   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:05.876025   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:05.895242   80404 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:38:05.895336   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.906575   80404 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:05.906662   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.918248   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.929178   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.942169   80404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:05.953542   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.969045   80404 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.989509   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:06.001532   80404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:06.012676   80404 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:06.012740   80404 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:06.030028   80404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:06.048168   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:06.190039   80404 ssh_runner.go:195] Run: sudo systemctl restart crio
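The sed/grep edits above all target /etc/crio/crio.conf.d/02-crio.conf: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. Expressed as a standalone drop-in, the net effect is roughly the following (a sketch with a hypothetical file name; the real 02-crio.conf on the ISO carries additional keys):

	cat <<-'EOF' | sudo tee /etc/crio/crio.conf.d/99-minikube-sketch.conf
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl restart crio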
	I0612 21:38:06.349088   80404 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:06.349151   80404 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:06.355251   80404 start.go:562] Will wait 60s for crictl version
	I0612 21:38:06.355321   80404 ssh_runner.go:195] Run: which crictl
	I0612 21:38:06.359456   80404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:06.400450   80404 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:06.400525   80404 ssh_runner.go:195] Run: crio --version
	I0612 21:38:06.430078   80404 ssh_runner.go:195] Run: crio --version
	I0612 21:38:06.461616   80404 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:38:04.949078   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .Start
	I0612 21:38:04.949226   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring networks are active...
	I0612 21:38:04.949936   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network default is active
	I0612 21:38:04.950371   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network mk-old-k8s-version-983302 is active
	I0612 21:38:04.950813   80762 main.go:141] libmachine: (old-k8s-version-983302) Getting domain xml...
	I0612 21:38:04.951549   80762 main.go:141] libmachine: (old-k8s-version-983302) Creating domain...
	I0612 21:38:06.296150   80762 main.go:141] libmachine: (old-k8s-version-983302) Waiting to get IP...
	I0612 21:38:06.296978   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.297465   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.297570   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.297453   81824 retry.go:31] will retry after 256.609938ms: waiting for machine to come up
	I0612 21:38:06.556307   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.556935   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.556967   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.556884   81824 retry.go:31] will retry after 285.754887ms: waiting for machine to come up
	I0612 21:38:06.844674   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.845227   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.845255   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.845171   81824 retry.go:31] will retry after 326.266367ms: waiting for machine to come up
	I0612 21:38:07.172788   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:07.173414   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:07.173447   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:07.173353   81824 retry.go:31] will retry after 393.443927ms: waiting for machine to come up
	I0612 21:38:07.568084   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:07.568645   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:07.568673   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:07.568609   81824 retry.go:31] will retry after 726.66775ms: waiting for machine to come up
	I0612 21:38:06.462860   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:06.466111   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:06.466521   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:06.466551   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:06.466837   80404 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:06.471361   80404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:06.485595   80404 kubeadm.go:877] updating cluster {Name:embed-certs-591460 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:06.485718   80404 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:38:06.485761   80404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:06.528708   80404 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:38:06.528778   80404 ssh_runner.go:195] Run: which lz4
	I0612 21:38:06.533340   80404 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0612 21:38:06.538076   80404 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:38:06.538115   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 21:38:08.078495   80404 crio.go:462] duration metric: took 1.545201872s to copy over tarball
	I0612 21:38:08.078573   80404 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:38:06.917632   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:07.916734   80243 node_ready.go:49] node "default-k8s-diff-port-376087" has status "Ready":"True"
	I0612 21:38:07.916763   80243 node_ready.go:38] duration metric: took 7.504763576s for node "default-k8s-diff-port-376087" to be "Ready" ...
	I0612 21:38:07.916775   80243 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:38:07.924249   80243 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.931751   80243 pod_ready.go:92] pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:07.931773   80243 pod_ready.go:81] duration metric: took 7.493608ms for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.931782   80243 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.937804   80243 pod_ready.go:92] pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:07.937880   80243 pod_ready.go:81] duration metric: took 6.090191ms for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.937904   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:09.944927   80243 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:08.296811   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:08.297295   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:08.297319   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:08.297250   81824 retry.go:31] will retry after 658.540746ms: waiting for machine to come up
	I0612 21:38:08.957164   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:08.957611   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:08.957635   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:08.957576   81824 retry.go:31] will retry after 921.725713ms: waiting for machine to come up
	I0612 21:38:09.880881   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:09.881672   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:09.881703   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:09.881604   81824 retry.go:31] will retry after 1.355846361s: waiting for machine to come up
	I0612 21:38:11.238616   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:11.239058   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:11.239094   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:11.238996   81824 retry.go:31] will retry after 1.3469357s: waiting for machine to come up
	I0612 21:38:12.587245   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:12.587747   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:12.587785   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:12.587683   81824 retry.go:31] will retry after 1.616666063s: waiting for machine to come up
	I0612 21:38:10.426384   80404 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.347778968s)
	I0612 21:38:10.426418   80404 crio.go:469] duration metric: took 2.347893056s to extract the tarball
	I0612 21:38:10.426427   80404 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:38:10.472235   80404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:10.522846   80404 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:38:10.522869   80404 cache_images.go:84] Images are preloaded, skipping loading
	I0612 21:38:10.522876   80404 kubeadm.go:928] updating node { 192.168.39.147 8443 v1.30.1 crio true true} ...
	I0612 21:38:10.523007   80404 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-591460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
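The [Unit]/[Service] fragment above is the kubelet drop-in that gets scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down; the empty ExecStart= line clears any inherited command before the minikube-specific one is set. To inspect the merged unit on the guest (a sketch):

	systemctl cat kubelet | grep -A3 '^ExecStart'
	sudo systemctl daemon-reload && sudo systemctl start kubelet   # the same pair of steps the log runs below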
	I0612 21:38:10.523163   80404 ssh_runner.go:195] Run: crio config
	I0612 21:38:10.577165   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:38:10.577193   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:10.577209   80404 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:38:10.577244   80404 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-591460 NodeName:embed-certs-591460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:38:10.577400   80404 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-591460"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:38:10.577479   80404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:38:10.587499   80404 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:38:10.587573   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:38:10.597410   80404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0612 21:38:10.614617   80404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:38:10.632222   80404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
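At this point the rendered kubeadm configuration shown earlier has been copied to /var/tmp/minikube/kubeadm.yaml.new on the guest. On a brand-new control plane a file like this is handed to the bundled kubeadm binary, roughly as below; that is a sketch only, since this particular run is restarting an existing cluster and the exact invocation appears later in the log, outside this excerpt:

	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml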
	I0612 21:38:10.649693   80404 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I0612 21:38:10.653639   80404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:10.666501   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:10.802679   80404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:10.820975   80404 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460 for IP: 192.168.39.147
	I0612 21:38:10.821001   80404 certs.go:194] generating shared ca certs ...
	I0612 21:38:10.821022   80404 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:10.821187   80404 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:38:10.821233   80404 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:38:10.821243   80404 certs.go:256] generating profile certs ...
	I0612 21:38:10.821326   80404 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/client.key
	I0612 21:38:10.821402   80404 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.key.3b2e21e0
	I0612 21:38:10.821440   80404 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.key
	I0612 21:38:10.821575   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:38:10.821616   80404 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:38:10.821626   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:38:10.821655   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:38:10.821706   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:38:10.821751   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:38:10.821812   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:10.822621   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:38:10.879261   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:38:10.924352   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:38:10.961294   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:38:10.993792   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0612 21:38:11.039515   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:38:11.063161   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:38:11.086759   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:38:11.109693   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:38:11.133083   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:38:11.155716   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:38:11.181860   80404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:38:11.199989   80404 ssh_runner.go:195] Run: openssl version
	I0612 21:38:11.205811   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:38:11.216640   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.221692   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.221754   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.227829   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:38:11.239918   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:38:11.251648   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.256123   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.256176   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.261880   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:38:11.273184   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:38:11.284832   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.289679   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.289732   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.295338   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:38:11.306317   80404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:38:11.310737   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:38:11.320403   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:38:11.327756   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:38:11.333976   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:38:11.340200   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:38:11.346386   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
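
	The six openssl runs above all use the same pattern: -checkend 86400 asks whether the certificate will still be valid 24 hours from now, so a soon-to-expire control-plane cert can be regenerated before the restart proceeds. Below is a minimal Go sketch of the equivalent check; the file path is taken from the log, but the program itself is illustrative and not minikube's implementation.

	// cert_checkend.go: sketch of `openssl x509 -checkend 86400` in Go.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// -checkend 86400: fail if the cert expires within the next 86400 seconds.
		cutoff := time.Now().Add(86400 * time.Second)
		if cert.NotAfter.Before(cutoff) {
			fmt.Println("certificate expires within 24h; regeneration would be needed")
		} else {
			fmt.Println("certificate valid past the 24h check window")
		}
	}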
	I0612 21:38:11.352268   80404 kubeadm.go:391] StartCluster: {Name:embed-certs-591460 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:38:11.352385   80404 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:38:11.352435   80404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:11.390802   80404 cri.go:89] found id: ""
	I0612 21:38:11.390870   80404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:38:11.402604   80404 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:38:11.402626   80404 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:38:11.402630   80404 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:38:11.402682   80404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:38:11.413636   80404 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:38:11.414999   80404 kubeconfig.go:125] found "embed-certs-591460" server: "https://192.168.39.147:8443"
	I0612 21:38:11.417654   80404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:38:11.427456   80404 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.147
	I0612 21:38:11.427496   80404 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:38:11.427509   80404 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:38:11.427559   80404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:11.462135   80404 cri.go:89] found id: ""
	I0612 21:38:11.462211   80404 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:38:11.478193   80404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:38:11.488816   80404 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:38:11.488838   80404 kubeadm.go:156] found existing configuration files:
	
	I0612 21:38:11.488899   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:38:11.498079   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:38:11.498154   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:38:11.508044   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:38:11.519721   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:38:11.519785   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:38:11.529554   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:38:11.538699   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:38:11.538750   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:38:11.548154   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:38:11.559980   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:38:11.560053   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:38:11.569737   80404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:38:11.579812   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:11.703454   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:12.773142   80404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069644541s)
	I0612 21:38:12.773183   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:12.991458   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:13.080268   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:13.207751   80404 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:38:13.207934   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:13.708672   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:14.208389   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:14.268408   80404 api_server.go:72] duration metric: took 1.060631955s to wait for apiserver process to appear ...
	I0612 21:38:14.268443   80404 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:38:14.268464   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:14.269096   80404 api_server.go:269] stopped: https://192.168.39.147:8443/healthz: Get "https://192.168.39.147:8443/healthz": dial tcp 192.168.39.147:8443: connect: connection refused
	I0612 21:38:10.445507   80243 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.445530   80243 pod_ready.go:81] duration metric: took 2.50760731s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.445542   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.450290   80243 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.450310   80243 pod_ready.go:81] duration metric: took 4.759656ms for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.450323   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.454909   80243 pod_ready.go:92] pod "kube-proxy-8lrgv" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.454940   80243 pod_ready.go:81] duration metric: took 4.597123ms for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.454951   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:12.587416   80243 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:13.505858   80243 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:13.505884   80243 pod_ready.go:81] duration metric: took 3.050925673s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:13.505896   80243 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:14.206281   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:14.206781   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:14.206810   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:14.206716   81824 retry.go:31] will retry after 2.057638604s: waiting for machine to come up
	I0612 21:38:16.266372   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:16.266920   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:16.266955   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:16.266858   81824 retry.go:31] will retry after 2.387834661s: waiting for machine to come up
	I0612 21:38:14.769114   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.056504   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:38:17.056539   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:38:17.056557   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.075356   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:38:17.075391   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:38:17.268731   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.277080   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:38:17.277111   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:38:17.768638   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.773438   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:38:17.773464   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:38:18.269037   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:18.273939   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0612 21:38:18.286895   80404 api_server.go:141] control plane version: v1.30.1
	I0612 21:38:18.286922   80404 api_server.go:131] duration metric: took 4.018473342s to wait for apiserver health ...
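
	The healthz probe above simply retries until the endpoint answers 200, treating connection-refused, 403 (the anonymous user) and 500 (post-start hooks such as rbac/bootstrap-roles still running) as transient. A rough Go sketch of that polling loop follows; the endpoint and the overall budget come from the log, while skipping TLS verification and the 500ms retry interval are assumptions made for brevity, not minikube's exact api_server.go logic.

	// healthz_poll.go: illustrative polling loop against the apiserver healthz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.147:8443/healthz")
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Println("healthz returned", code, "- retrying")
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver health")
	}

	In the log the first 200 arrives roughly four seconds after the loop starts, matching the 4.018s duration metric reported above.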
	I0612 21:38:18.286931   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:38:18.286937   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:18.288955   80404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:38:18.290619   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:38:18.305334   80404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:38:18.336590   80404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:38:18.351276   80404 system_pods.go:59] 8 kube-system pods found
	I0612 21:38:18.351320   80404 system_pods.go:61] "coredns-7db6d8ff4d-z99cq" [575689b8-3c51-45c8-874c-481e4b9db39b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:38:18.351331   80404 system_pods.go:61] "etcd-embed-certs-591460" [190c1552-6bca-41f2-9ea9-e415e1ae9406] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:38:18.351342   80404 system_pods.go:61] "kube-apiserver-embed-certs-591460" [c0fed28f-1d80-44eb-a66a-3a5b36704882] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:38:18.351350   80404 system_pods.go:61] "kube-controller-manager-embed-certs-591460" [79758f2a-2517-4a76-a3ae-536ac3adf781] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:38:18.351357   80404 system_pods.go:61] "kube-proxy-79kz5" [74ddb284-7cb2-46ec-ab9f-246dbfa0c4ec] Running
	I0612 21:38:18.351372   80404 system_pods.go:61] "kube-scheduler-embed-certs-591460" [d9916521-fcc1-4bf1-8b03-8a5553f07bd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:38:18.351383   80404 system_pods.go:61] "metrics-server-569cc877fc-bkhxn" [f78482c8-82ea-4dbd-999f-2e4c73c98b65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:38:18.351396   80404 system_pods.go:61] "storage-provisioner" [b3b117f7-ac44-4430-afb4-c6991ce1b71d] Running
	I0612 21:38:18.351407   80404 system_pods.go:74] duration metric: took 14.792966ms to wait for pod list to return data ...
	I0612 21:38:18.351419   80404 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:38:18.357736   80404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:38:18.357769   80404 node_conditions.go:123] node cpu capacity is 2
	I0612 21:38:18.357786   80404 node_conditions.go:105] duration metric: took 6.360028ms to run NodePressure ...
	I0612 21:38:18.357805   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:18.634312   80404 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:38:18.638679   80404 kubeadm.go:733] kubelet initialised
	I0612 21:38:18.638700   80404 kubeadm.go:734] duration metric: took 4.362243ms waiting for restarted kubelet to initialise ...
	I0612 21:38:18.638706   80404 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:38:18.643840   80404 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.648561   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.648585   80404 pod_ready.go:81] duration metric: took 4.721795ms for pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.648597   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.648606   80404 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.654013   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "etcd-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.654036   80404 pod_ready.go:81] duration metric: took 5.419602ms for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.654046   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "etcd-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.654054   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.659445   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.659468   80404 pod_ready.go:81] duration metric: took 5.404211ms for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.659479   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.659487   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.741451   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.741480   80404 pod_ready.go:81] duration metric: took 81.981354ms for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.741489   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.741495   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-79kz5" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:19.140710   80404 pod_ready.go:92] pod "kube-proxy-79kz5" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:19.140734   80404 pod_ready.go:81] duration metric: took 399.230349ms for pod "kube-proxy-79kz5" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:19.140744   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
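
	Each pod_ready.go line above is one poll of a pod's Ready condition via the API server, with the node's own NotReady state treated as a reason to skip and retry. The sketch below shows what such a check can look like with client-go; the kubeconfig path, namespace and pod name are lifted from the log for illustration, and the loop is not minikube's actual helper.

	// pod_ready_sketch.go: illustrative Ready-condition poll using client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's Ready condition is True.
	func podIsReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-79kz5", metav1.GetOptions{})
			if err == nil && podIsReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}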
	I0612 21:38:15.513300   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:18.013924   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:20.024841   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:18.656575   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:18.657074   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:18.657111   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:18.657022   81824 retry.go:31] will retry after 3.518256927s: waiting for machine to come up
	I0612 21:38:22.176416   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.176901   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has current primary IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.176930   80762 main.go:141] libmachine: (old-k8s-version-983302) Found IP for machine: 192.168.50.81
	I0612 21:38:22.176965   80762 main.go:141] libmachine: (old-k8s-version-983302) Reserving static IP address...
	I0612 21:38:22.177385   80762 main.go:141] libmachine: (old-k8s-version-983302) Reserved static IP address: 192.168.50.81
	I0612 21:38:22.177422   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "old-k8s-version-983302", mac: "52:54:00:7b:c8:d2", ip: "192.168.50.81"} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.177435   80762 main.go:141] libmachine: (old-k8s-version-983302) Waiting for SSH to be available...
	I0612 21:38:22.177459   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | skip adding static IP to network mk-old-k8s-version-983302 - found existing host DHCP lease matching {name: "old-k8s-version-983302", mac: "52:54:00:7b:c8:d2", ip: "192.168.50.81"}
	I0612 21:38:22.177471   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Getting to WaitForSSH function...
	I0612 21:38:22.179728   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.180130   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.180158   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.180273   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH client type: external
	I0612 21:38:22.180334   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa (-rw-------)
	I0612 21:38:22.180368   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:22.180387   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | About to run SSH command:
	I0612 21:38:22.180399   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | exit 0
	I0612 21:38:22.308621   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:22.308979   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetConfigRaw
	I0612 21:38:22.309620   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:22.312747   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.313124   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.313155   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.313421   80762 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:38:22.313635   80762 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:22.313658   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:22.313884   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.316476   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.316961   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.317014   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.317221   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.317408   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.317600   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.317775   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.317955   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.318195   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.318207   80762 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:22.431693   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:22.431728   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.431979   80762 buildroot.go:166] provisioning hostname "old-k8s-version-983302"
	I0612 21:38:22.432006   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.432191   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.434830   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.435267   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.435300   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.435431   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.435598   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.435718   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.435826   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.436056   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.436237   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.436252   80762 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-983302 && echo "old-k8s-version-983302" | sudo tee /etc/hostname
	I0612 21:38:22.563119   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-983302
	
	I0612 21:38:22.563184   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.565915   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.566281   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.566315   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.566513   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.566704   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.566885   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.567021   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.567243   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.567463   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.567490   80762 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-983302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-983302/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-983302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:22.690443   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:22.690474   80762 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:22.690494   80762 buildroot.go:174] setting up certificates
	I0612 21:38:22.690504   80762 provision.go:84] configureAuth start
	I0612 21:38:22.690514   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.690774   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:22.693186   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.693528   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.693576   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.693689   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.695948   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.696285   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.696318   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.696432   80762 provision.go:143] copyHostCerts
	I0612 21:38:22.696501   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:22.696521   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:22.696583   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:22.696662   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:22.696671   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:22.696693   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:22.696774   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:22.696784   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:22.696803   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:22.696847   80762 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-983302 san=[127.0.0.1 192.168.50.81 localhost minikube old-k8s-version-983302]
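
	The server cert generated above is signed by the local CA and carries both IP and DNS SANs so the provisioned machine can be reached by address or by name. As a rough illustration only (assuming PEM files named ca.pem and ca-key.pem with a PKCS#1 RSA key, which may not match the provisioner's real layout), a Go sketch of issuing such a SAN-bearing certificate:

	// san_cert_sketch.go: illustrative CA-signed server certificate with SANs.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}

	// mustDecode reads a PEM file and returns its first block.
	func mustDecode(path string) *pem.Block {
		data, err := os.ReadFile(path)
		check(err)
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM data in " + path)
		}
		return block
	}

	func main() {
		caCert, err := x509.ParseCertificate(mustDecode("ca.pem").Bytes)
		check(err)
		caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem").Bytes)
		check(err)

		// Template with the SANs listed in the log line above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-983302"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.81")},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-983302"},
		}

		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}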
	I0612 21:38:23.576378   80157 start.go:364] duration metric: took 53.730674695s to acquireMachinesLock for "no-preload-087875"
	I0612 21:38:23.576429   80157 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:38:23.576436   80157 fix.go:54] fixHost starting: 
	I0612 21:38:23.576844   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:23.576875   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:23.594879   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40925
	I0612 21:38:23.595284   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:23.595811   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:38:23.595836   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:23.596201   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:23.596404   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:23.596559   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:38:23.598372   80157 fix.go:112] recreateIfNeeded on no-preload-087875: state=Stopped err=<nil>
	I0612 21:38:23.598399   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	W0612 21:38:23.598558   80157 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:38:23.600649   80157 out.go:177] * Restarting existing kvm2 VM for "no-preload-087875" ...
	I0612 21:38:21.147354   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:23.147393   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:22.863618   80762 provision.go:177] copyRemoteCerts
	I0612 21:38:22.863672   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:22.863698   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.866979   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.867371   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.867403   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.867548   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.867734   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.867904   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.868126   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:22.958350   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 21:38:22.984409   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:23.009623   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0612 21:38:23.038026   80762 provision.go:87] duration metric: took 347.510898ms to configureAuth
	I0612 21:38:23.038063   80762 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:23.038309   80762 config.go:182] Loaded profile config "old-k8s-version-983302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:38:23.038390   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.041196   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.041634   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.041660   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.041842   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.042044   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.042222   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.042410   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.042580   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:23.042780   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:23.042799   80762 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:23.324862   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:23.324893   80762 machine.go:97] duration metric: took 1.01124225s to provisionDockerMachine
	I0612 21:38:23.324904   80762 start.go:293] postStartSetup for "old-k8s-version-983302" (driver="kvm2")
	I0612 21:38:23.324913   80762 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:23.324928   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.325240   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:23.325274   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.328007   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.328343   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.328372   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.328578   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.328770   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.328939   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.329068   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.416040   80762 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:23.420586   80762 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:23.420607   80762 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:23.420674   80762 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:23.420739   80762 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:23.420823   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:23.432266   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:23.460619   80762 start.go:296] duration metric: took 135.703593ms for postStartSetup
	I0612 21:38:23.460661   80762 fix.go:56] duration metric: took 18.536593686s for fixHost
	I0612 21:38:23.460684   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.463415   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.463745   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.463780   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.463909   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.464110   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.464248   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.464378   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.464533   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:23.464742   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:23.464754   80762 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:23.576211   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228303.539451044
	
	I0612 21:38:23.576231   80762 fix.go:216] guest clock: 1718228303.539451044
	I0612 21:38:23.576239   80762 fix.go:229] Guest: 2024-06-12 21:38:23.539451044 +0000 UTC Remote: 2024-06-12 21:38:23.460665921 +0000 UTC m=+270.637213069 (delta=78.785123ms)
	I0612 21:38:23.576285   80762 fix.go:200] guest clock delta is within tolerance: 78.785123ms
	I0612 21:38:23.576291   80762 start.go:83] releasing machines lock for "old-k8s-version-983302", held for 18.65227368s
	I0612 21:38:23.576316   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.576617   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:23.579493   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.579881   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.579913   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.580120   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580693   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580865   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580952   80762 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:23.581005   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.581111   80762 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:23.581141   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.584053   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584262   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584443   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.584479   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584587   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.584690   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.584728   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584757   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.584855   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.584918   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.584980   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.585067   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.585115   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.585227   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.666055   80762 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:23.688409   80762 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:23.848030   80762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:23.855302   80762 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:23.855383   80762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:23.874362   80762 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:23.874389   80762 start.go:494] detecting cgroup driver to use...
	I0612 21:38:23.874461   80762 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:23.893239   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:23.909774   80762 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:23.909844   80762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:23.926084   80762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:23.943341   80762 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:24.072731   80762 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:24.244551   80762 docker.go:233] disabling docker service ...
	I0612 21:38:24.244624   80762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:24.261862   80762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:24.277051   80762 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:24.426146   80762 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:24.560634   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:24.575339   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:24.595965   80762 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0612 21:38:24.596043   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.607814   80762 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:24.607892   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.619001   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.630982   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.644326   80762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:24.658640   80762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:24.673944   80762 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:24.673994   80762 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:24.693853   80762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:24.709251   80762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:24.856222   80762 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:25.023760   80762 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:25.023842   80762 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:25.029449   80762 start.go:562] Will wait 60s for crictl version
	I0612 21:38:25.029522   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:25.033750   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:25.080911   80762 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:25.081018   80762 ssh_runner.go:195] Run: crio --version
	I0612 21:38:25.111727   80762 ssh_runner.go:195] Run: crio --version
	I0612 21:38:25.145999   80762 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0612 21:38:22.512748   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:24.515486   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:23.602119   80157 main.go:141] libmachine: (no-preload-087875) Calling .Start
	I0612 21:38:23.602319   80157 main.go:141] libmachine: (no-preload-087875) Ensuring networks are active...
	I0612 21:38:23.603167   80157 main.go:141] libmachine: (no-preload-087875) Ensuring network default is active
	I0612 21:38:23.603533   80157 main.go:141] libmachine: (no-preload-087875) Ensuring network mk-no-preload-087875 is active
	I0612 21:38:23.603887   80157 main.go:141] libmachine: (no-preload-087875) Getting domain xml...
	I0612 21:38:23.604617   80157 main.go:141] libmachine: (no-preload-087875) Creating domain...
	I0612 21:38:24.978550   80157 main.go:141] libmachine: (no-preload-087875) Waiting to get IP...
	I0612 21:38:24.979551   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:24.979945   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:24.980007   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:24.979925   81986 retry.go:31] will retry after 224.557195ms: waiting for machine to come up
	I0612 21:38:25.206441   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.206928   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.206957   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.206875   81986 retry.go:31] will retry after 361.682908ms: waiting for machine to come up
	I0612 21:38:25.570564   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.571139   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.571184   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.571089   81986 retry.go:31] will retry after 328.335873ms: waiting for machine to come up
	I0612 21:38:25.901471   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.902020   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.902054   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.901953   81986 retry.go:31] will retry after 505.408325ms: waiting for machine to come up
	I0612 21:38:26.408636   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:26.409139   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:26.409167   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:26.409091   81986 retry.go:31] will retry after 749.519426ms: waiting for machine to come up
	I0612 21:38:27.160100   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:27.160563   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:27.160611   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:27.160537   81986 retry.go:31] will retry after 641.037463ms: waiting for machine to come up
	I0612 21:38:25.147420   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:25.151029   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:25.151402   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:25.151432   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:25.151726   80762 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:25.156561   80762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:25.171243   80762 kubeadm.go:877] updating cluster {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:25.171386   80762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:38:25.171429   80762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:25.225872   80762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:38:25.225936   80762 ssh_runner.go:195] Run: which lz4
	I0612 21:38:25.230447   80762 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0612 21:38:25.235452   80762 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:38:25.235485   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0612 21:38:27.033962   80762 crio.go:462] duration metric: took 1.803565745s to copy over tarball
	I0612 21:38:27.034045   80762 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:38:25.149629   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:27.651785   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:26.516743   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:29.013751   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:27.803722   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:27.804278   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:27.804316   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:27.804252   81986 retry.go:31] will retry after 1.184505978s: waiting for machine to come up
	I0612 21:38:28.990221   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:28.990736   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:28.990763   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:28.990709   81986 retry.go:31] will retry after 1.061139219s: waiting for machine to come up
	I0612 21:38:30.054187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:30.054768   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:30.054805   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:30.054718   81986 retry.go:31] will retry after 1.621121981s: waiting for machine to come up
	I0612 21:38:31.677355   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:31.677938   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:31.677966   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:31.677890   81986 retry.go:31] will retry after 2.17746309s: waiting for machine to come up
	I0612 21:38:30.212028   80762 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.177947965s)
	I0612 21:38:30.212073   80762 crio.go:469] duration metric: took 3.178080815s to extract the tarball
	I0612 21:38:30.212085   80762 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:38:30.256957   80762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:30.297891   80762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:38:30.297917   80762 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0612 21:38:30.298025   80762 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.298045   80762 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.298055   80762 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.298021   80762 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0612 21:38:30.298106   80762 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.298062   80762 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.298004   80762 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:30.298079   80762 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.299755   80762 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0612 21:38:30.299842   80762 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.299848   80762 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.299843   80762 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:30.299866   80762 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.299876   80762 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.299905   80762 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.299755   80762 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.466739   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0612 21:38:30.516078   80762 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0612 21:38:30.516127   80762 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0612 21:38:30.516174   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.520362   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0612 21:38:30.545437   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.563320   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0612 21:38:30.599110   80762 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0612 21:38:30.599155   80762 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.599217   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.603578   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.639450   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0612 21:38:30.649462   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.650602   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.652555   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.656970   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.672136   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.766185   80762 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0612 21:38:30.766233   80762 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.766279   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.778901   80762 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0612 21:38:30.778946   80762 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.778952   80762 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0612 21:38:30.778983   80762 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.778994   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.779041   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.793610   80762 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0612 21:38:30.793650   80762 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.793698   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.807451   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.807482   80762 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0612 21:38:30.807518   80762 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.807458   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.807518   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.807557   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.807559   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.916470   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0612 21:38:30.916564   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0612 21:38:30.916576   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0612 21:38:30.916603   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0612 21:38:30.916646   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.953152   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0612 21:38:31.194046   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:31.341827   80762 cache_images.go:92] duration metric: took 1.043891497s to LoadCachedImages
	W0612 21:38:31.341922   80762 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0612 21:38:31.341937   80762 kubeadm.go:928] updating node { 192.168.50.81 8443 v1.20.0 crio true true} ...
	I0612 21:38:31.342064   80762 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-983302 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:38:31.342154   80762 ssh_runner.go:195] Run: crio config
	I0612 21:38:31.395673   80762 cni.go:84] Creating CNI manager for ""
	I0612 21:38:31.395706   80762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:31.395722   80762 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:38:31.395744   80762 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-983302 NodeName:old-k8s-version-983302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0612 21:38:31.395918   80762 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-983302"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:38:31.395995   80762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0612 21:38:31.410706   80762 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:38:31.410785   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:38:31.425161   80762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0612 21:38:31.445883   80762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:38:31.463605   80762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0612 21:38:31.482797   80762 ssh_runner.go:195] Run: grep 192.168.50.81	control-plane.minikube.internal$ /etc/hosts
	I0612 21:38:31.486974   80762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:31.499681   80762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:31.645490   80762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:31.668769   80762 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302 for IP: 192.168.50.81
	I0612 21:38:31.668797   80762 certs.go:194] generating shared ca certs ...
	I0612 21:38:31.668820   80762 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:31.668987   80762 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:38:31.669061   80762 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:38:31.669088   80762 certs.go:256] generating profile certs ...
	I0612 21:38:31.669212   80762 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/client.key
	I0612 21:38:31.669309   80762 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key.1098c83c
	I0612 21:38:31.669373   80762 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key
	I0612 21:38:31.669548   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:38:31.669598   80762 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:38:31.669613   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:38:31.669662   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:38:31.669723   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:38:31.669759   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:38:31.669830   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:31.670835   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:38:31.717330   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:38:31.754900   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:38:31.798099   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:38:31.839647   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0612 21:38:31.883454   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:38:31.920765   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:38:31.953069   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0612 21:38:31.978134   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:38:32.002475   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:38:32.027784   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:38:32.053563   80762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:38:32.074493   80762 ssh_runner.go:195] Run: openssl version
	I0612 21:38:32.080620   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:38:32.093531   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.098615   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.098688   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.104777   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:38:32.116551   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:38:32.130188   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.135197   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.135279   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.142777   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:38:32.156051   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:38:32.169866   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.175249   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.175340   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.181561   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:38:32.193430   80762 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:38:32.198235   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:38:32.204654   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:38:32.210771   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:38:32.216966   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:38:32.223203   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:38:32.230990   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 21:38:32.237290   80762 kubeadm.go:391] StartCluster: {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:38:32.237446   80762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:38:32.237503   80762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:32.282436   80762 cri.go:89] found id: ""
	I0612 21:38:32.282516   80762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:38:32.295283   80762 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:38:32.295313   80762 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:38:32.295321   80762 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:38:32.295400   80762 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:38:32.307483   80762 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:38:32.308555   80762 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-983302" does not appear in /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:38:32.309335   80762 kubeconfig.go:62] /home/jenkins/minikube-integration/17779-14199/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-983302" cluster setting kubeconfig missing "old-k8s-version-983302" context setting]
	I0612 21:38:32.310486   80762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:32.397524   80762 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:38:32.411765   80762 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.81
	I0612 21:38:32.411797   80762 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:38:32.411807   80762 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:38:32.411849   80762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:32.460009   80762 cri.go:89] found id: ""
	I0612 21:38:32.460078   80762 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:38:32.481670   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:38:32.493664   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:38:32.493684   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:38:32.493734   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:38:32.503974   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:38:32.504044   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:38:32.515971   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:38:32.525772   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:38:32.525832   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:38:32.537137   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:38:32.548539   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:38:32.548600   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:38:32.560401   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:38:32.570608   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:38:32.570681   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:38:32.582763   80762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:38:32.594407   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:32.734633   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:30.151681   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:31.658859   80404 pod_ready.go:92] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:31.658881   80404 pod_ready.go:81] duration metric: took 12.518130926s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:31.658890   80404 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:33.666360   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:31.357093   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:33.513222   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:33.857141   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:33.857675   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:33.857702   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:33.857648   81986 retry.go:31] will retry after 2.485654549s: waiting for machine to come up
	I0612 21:38:36.344611   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:36.345117   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:36.345148   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:36.345075   81986 retry.go:31] will retry after 3.560063035s: waiting for machine to come up
	I0612 21:38:33.526337   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.768139   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.896716   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.986708   80762 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:38:33.986832   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:34.487194   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:34.987580   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.486966   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.987793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:36.487534   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:36.987526   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:37.487035   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.669161   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:38.166177   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:35.513787   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:38.011903   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:39.907588   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:39.908051   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:39.908110   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:39.907994   81986 retry.go:31] will retry after 4.524521166s: waiting for machine to come up
	I0612 21:38:37.986904   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:38.487262   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:38.986907   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:39.486895   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:39.987060   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.487385   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.987049   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:41.487325   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:41.987550   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:42.487225   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.665078   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:42.665731   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:44.666653   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:40.512741   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:42.513175   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:45.013451   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:44.434330   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.434850   80157 main.go:141] libmachine: (no-preload-087875) Found IP for machine: 192.168.72.63
	I0612 21:38:44.434883   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has current primary IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.434893   80157 main.go:141] libmachine: (no-preload-087875) Reserving static IP address...
	I0612 21:38:44.435324   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "no-preload-087875", mac: "52:54:00:6b:a2:aa", ip: "192.168.72.63"} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.435358   80157 main.go:141] libmachine: (no-preload-087875) Reserved static IP address: 192.168.72.63
	I0612 21:38:44.435378   80157 main.go:141] libmachine: (no-preload-087875) DBG | skip adding static IP to network mk-no-preload-087875 - found existing host DHCP lease matching {name: "no-preload-087875", mac: "52:54:00:6b:a2:aa", ip: "192.168.72.63"}
	I0612 21:38:44.435388   80157 main.go:141] libmachine: (no-preload-087875) Waiting for SSH to be available...
	I0612 21:38:44.435397   80157 main.go:141] libmachine: (no-preload-087875) DBG | Getting to WaitForSSH function...
	I0612 21:38:44.437881   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.438196   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.438218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.438385   80157 main.go:141] libmachine: (no-preload-087875) DBG | Using SSH client type: external
	I0612 21:38:44.438414   80157 main.go:141] libmachine: (no-preload-087875) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa (-rw-------)
	I0612 21:38:44.438452   80157 main.go:141] libmachine: (no-preload-087875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:44.438469   80157 main.go:141] libmachine: (no-preload-087875) DBG | About to run SSH command:
	I0612 21:38:44.438489   80157 main.go:141] libmachine: (no-preload-087875) DBG | exit 0
	I0612 21:38:44.571149   80157 main.go:141] libmachine: (no-preload-087875) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:44.571499   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetConfigRaw
	I0612 21:38:44.572172   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:44.574754   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.575142   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.575187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.575406   80157 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/config.json ...
	I0612 21:38:44.575580   80157 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:44.575595   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:44.575825   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.578584   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.579008   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.579030   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.579214   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.579394   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.579534   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.579684   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.579924   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.580096   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.580109   80157 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:44.691573   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:44.691609   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.691890   80157 buildroot.go:166] provisioning hostname "no-preload-087875"
	I0612 21:38:44.691914   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.692120   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.695218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.695697   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.695729   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.695783   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.695986   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.696200   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.696383   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.696572   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.696776   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.696794   80157 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-087875 && echo "no-preload-087875" | sudo tee /etc/hostname
	I0612 21:38:44.821857   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-087875
	
	I0612 21:38:44.821893   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.824821   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.825263   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.825295   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.825523   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.825740   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.825912   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.826024   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.826187   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.826406   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.826430   80157 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-087875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-087875/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-087875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:44.948871   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:44.948904   80157 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:44.948930   80157 buildroot.go:174] setting up certificates
	I0612 21:38:44.948941   80157 provision.go:84] configureAuth start
	I0612 21:38:44.948954   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.949247   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:44.952166   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.952511   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.952538   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.952662   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.955149   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.955483   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.955505   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.955658   80157 provision.go:143] copyHostCerts
	I0612 21:38:44.955731   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:44.955743   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:44.955807   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:44.955929   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:44.955942   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:44.955975   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:44.956052   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:44.956059   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:44.956078   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:44.956125   80157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.no-preload-087875 san=[127.0.0.1 192.168.72.63 localhost minikube no-preload-087875]
	I0612 21:38:45.138701   80157 provision.go:177] copyRemoteCerts
	I0612 21:38:45.138758   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:45.138781   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.141540   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.142011   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.142055   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.142199   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.142457   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.142603   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.142765   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.234480   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:45.259043   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0612 21:38:45.290511   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:38:45.316377   80157 provision.go:87] duration metric: took 367.423709ms to configureAuth
	I0612 21:38:45.316403   80157 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:45.316607   80157 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:45.316684   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.319596   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.320160   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.320187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.320384   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.320598   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.320778   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.320973   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.321203   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:45.321368   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:45.321387   80157 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:45.611478   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:45.611511   80157 machine.go:97] duration metric: took 1.035919707s to provisionDockerMachine
	I0612 21:38:45.611523   80157 start.go:293] postStartSetup for "no-preload-087875" (driver="kvm2")
	I0612 21:38:45.611533   80157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:45.611556   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.611843   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:45.611862   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.615071   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.615542   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.615582   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.615715   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.615889   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.616028   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.616204   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.707710   80157 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:45.712155   80157 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:45.712177   80157 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:45.712235   80157 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:45.712301   80157 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:45.712386   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:45.722654   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:45.747626   80157 start.go:296] duration metric: took 136.091584ms for postStartSetup
	I0612 21:38:45.747666   80157 fix.go:56] duration metric: took 22.171227252s for fixHost
	I0612 21:38:45.747685   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.750588   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.750972   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.750999   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.751231   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.751443   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.751598   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.751773   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.752005   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:45.752181   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:45.752195   80157 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:45.864042   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228325.837473906
	
	I0612 21:38:45.864068   80157 fix.go:216] guest clock: 1718228325.837473906
	I0612 21:38:45.864079   80157 fix.go:229] Guest: 2024-06-12 21:38:45.837473906 +0000 UTC Remote: 2024-06-12 21:38:45.747669277 +0000 UTC m=+358.493088442 (delta=89.804629ms)
	I0612 21:38:45.864106   80157 fix.go:200] guest clock delta is within tolerance: 89.804629ms
	I0612 21:38:45.864114   80157 start.go:83] releasing machines lock for "no-preload-087875", held for 22.287706082s
	I0612 21:38:45.864152   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.864448   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:45.867230   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.867603   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.867633   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.867768   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868293   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868453   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868535   80157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:45.868575   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.868663   80157 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:45.868681   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.871218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871489   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871678   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.871719   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871915   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.872061   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.872085   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.872109   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.872240   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.872246   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.872522   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.872529   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.872692   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.872868   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.953249   80157 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:45.976778   80157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:46.124511   80157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:46.130509   80157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:46.130575   80157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:46.149670   80157 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:46.149691   80157 start.go:494] detecting cgroup driver to use...
	I0612 21:38:46.149755   80157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:46.167865   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:46.182896   80157 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:46.182951   80157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:46.197058   80157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:46.211517   80157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:46.331986   80157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:46.500675   80157 docker.go:233] disabling docker service ...
	I0612 21:38:46.500745   80157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:46.516858   80157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:46.530617   80157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:46.674917   80157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:46.810090   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:46.825079   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:46.843895   80157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:38:46.843963   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.854170   80157 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:46.854245   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.864699   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.875057   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.886063   80157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:46.897688   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.908984   80157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.926803   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.939373   80157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:46.948868   80157 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:46.948922   80157 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:46.963593   80157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:46.973735   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:47.108669   80157 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:47.249938   80157 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:47.250044   80157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:47.255480   80157 start.go:562] Will wait 60s for crictl version
	I0612 21:38:47.255556   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.259730   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:47.303074   80157 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:47.303187   80157 ssh_runner.go:195] Run: crio --version
	I0612 21:38:47.332225   80157 ssh_runner.go:195] Run: crio --version
	I0612 21:38:47.363628   80157 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:38:42.987579   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:43.487465   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:43.987265   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:44.487935   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:44.987399   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:45.487793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:45.986898   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:46.486985   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:46.986848   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:47.486947   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:47.164573   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:49.165711   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:47.512195   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:49.512366   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:47.365068   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:47.367703   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:47.368079   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:47.368103   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:47.368325   80157 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:47.372608   80157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:47.386411   80157 kubeadm.go:877] updating cluster {Name:no-preload-087875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:47.386750   80157 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:38:47.386796   80157 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:47.422165   80157 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:38:47.422189   80157 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0612 21:38:47.422227   80157 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:47.422280   80157 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.422355   80157 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.422370   80157 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.422311   80157 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.422347   80157 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.422318   80157 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0612 21:38:47.422599   80157 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.423599   80157 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0612 21:38:47.423610   80157 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.423612   80157 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.423630   80157 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:47.423626   80157 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.423699   80157 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.423737   80157 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.423720   80157 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.556807   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0612 21:38:47.557424   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.561887   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.569402   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.571880   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.576879   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.587848   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.759890   80157 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0612 21:38:47.759926   80157 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.759947   80157 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0612 21:38:47.759973   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.759976   80157 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0612 21:38:47.760006   80157 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.760015   80157 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0612 21:38:47.759977   80157 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.760061   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760063   80157 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.760075   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760073   80157 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0612 21:38:47.760091   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760101   80157 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.760164   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.766878   80157 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0612 21:38:47.766905   80157 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.766943   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.777168   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.777197   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.778414   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.778459   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.778414   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.779057   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.882668   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0612 21:38:47.882770   80157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.902416   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0612 21:38:47.902532   80157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:47.917388   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0612 21:38:47.917417   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0612 21:38:47.917417   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0612 21:38:47.917473   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0612 21:38:47.917501   80157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:38:47.917528   80157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:47.917545   80157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:38:47.917500   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0612 21:38:47.917558   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.917594   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.917502   80157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:47.917559   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0612 21:38:47.929251   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0612 21:38:47.929299   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0612 21:38:47.929308   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0612 21:38:48.312589   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:50.713720   80157 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1: (2.796151375s)
	I0612 21:38:50.713767   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0612 21:38:50.713877   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.796263274s)
	I0612 21:38:50.713901   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0612 21:38:50.713877   80157 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.401254109s)
	I0612 21:38:50.713921   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:50.713966   80157 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0612 21:38:50.713987   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:50.714017   80157 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:50.714063   80157 ssh_runner.go:195] Run: which crictl
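
	The lines above record the cached-image flow for this profile: each required image is probed with "podman image inspect --format {{.Id}}", an image whose hash is missing from the runtime is marked "needs transfer", any stale tag is removed with crictl rmi, the cached tarball under /var/lib/minikube/images is stat'ed, and finally loaded with "podman load -i". The following is a minimal Go sketch of that decision, assuming the same commands; the image name, hash, and tarball path are taken from the log, but the code itself is an illustration, not minikube's implementation.

	// Illustrative sketch of the image-cache flow shown in the log: probe the
	// runtime for the image ID, drop a stale tag if needed, then load the
	// cached tarball with podman.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// ensureImage loads tarPath into the podman/CRI-O store unless the image
	// already resolves to wantID.
	func ensureImage(image, wantID, tarPath string) error {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err == nil && strings.TrimSpace(string(out)) == wantID {
			return nil // already present at the expected hash
		}
		// Remove any stale tag so the load below wins.
		_ = exec.Command("sudo", "crictl", "rmi", image).Run()
		if err := exec.Command("sudo", "podman", "load", "-i", tarPath).Run(); err != nil {
			return fmt.Errorf("podman load %s: %w", tarPath, err)
		}
		return nil
	}

	func main() {
		if err := ensureImage("registry.k8s.io/kube-apiserver:v1.30.1",
			"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
			"/var/lib/minikube/images/kube-apiserver_v1.30.1"); err != nil {
			fmt.Println(err)
		}
	}
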
	I0612 21:38:47.987863   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:48.487299   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:48.986886   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:49.486972   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:49.987859   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:50.487034   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:50.987724   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.486948   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.986873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:52.487668   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.665638   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:53.665855   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:51.512765   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:54.011870   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:53.169682   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.455668553s)
	I0612 21:38:53.169705   80157 ssh_runner.go:235] Completed: which crictl: (2.455619981s)
	I0612 21:38:53.169714   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0612 21:38:53.169741   80157 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:53.169759   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:53.169784   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:53.216895   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0612 21:38:53.217020   80157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:38:57.220343   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.050521066s)
	I0612 21:38:57.220376   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0612 21:38:57.220397   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:57.220444   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:57.220443   80157 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (4.003396955s)
	I0612 21:38:57.220487   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0612 21:38:52.987635   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:53.487500   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:53.987860   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:54.487855   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:54.986868   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:55.487259   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:55.987902   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.487535   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.987269   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:57.487542   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.166299   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.665085   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:56.012847   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.557142   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.682288   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.46182102s)
	I0612 21:38:58.682313   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0612 21:38:58.682337   80157 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:38:58.682376   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:39:00.576373   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.893964365s)
	I0612 21:39:00.576412   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0612 21:39:00.576443   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:39:00.576504   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:38:57.987222   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:58.486976   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:58.986913   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:59.487269   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:59.987289   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.487208   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.987690   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:01.487283   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:01.987541   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:02.487589   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.667732   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:03.165317   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:01.012684   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:03.015111   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:02.445930   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.86940281s)
	I0612 21:39:02.445960   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0612 21:39:02.445994   80157 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:39:02.446071   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:39:03.393330   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0612 21:39:03.393375   80157 cache_images.go:123] Successfully loaded all cached images
	I0612 21:39:03.393382   80157 cache_images.go:92] duration metric: took 15.9711807s to LoadCachedImages
	I0612 21:39:03.393397   80157 kubeadm.go:928] updating node { 192.168.72.63 8443 v1.30.1 crio true true} ...
	I0612 21:39:03.393543   80157 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-087875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:39:03.393658   80157 ssh_runner.go:195] Run: crio config
	I0612 21:39:03.448859   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:39:03.448884   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:39:03.448901   80157 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:39:03.448930   80157 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.63 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-087875 NodeName:no-preload-087875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:39:03.449103   80157 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-087875"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:39:03.449181   80157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:39:03.462756   80157 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:39:03.462825   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:39:03.472653   80157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0612 21:39:03.491567   80157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:39:03.509239   80157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0612 21:39:03.527802   80157 ssh_runner.go:195] Run: grep 192.168.72.63	control-plane.minikube.internal$ /etc/hosts
	I0612 21:39:03.531523   80157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:39:03.543748   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:39:03.666376   80157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:39:03.683563   80157 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875 for IP: 192.168.72.63
	I0612 21:39:03.683587   80157 certs.go:194] generating shared ca certs ...
	I0612 21:39:03.683606   80157 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:39:03.683766   80157 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:39:03.683816   80157 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:39:03.683831   80157 certs.go:256] generating profile certs ...
	I0612 21:39:03.683927   80157 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/client.key
	I0612 21:39:03.684010   80157 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.key.13709275
	I0612 21:39:03.684066   80157 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.key
	I0612 21:39:03.684217   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:39:03.684259   80157 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:39:03.684272   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:39:03.684318   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:39:03.684364   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:39:03.684395   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:39:03.684455   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:39:03.685098   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:39:03.732817   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:39:03.771449   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:39:03.800774   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:39:03.831845   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 21:39:03.862000   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 21:39:03.901036   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:39:03.925025   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:39:03.950862   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:39:03.974222   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:39:04.002698   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:39:04.028173   80157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:39:04.044685   80157 ssh_runner.go:195] Run: openssl version
	I0612 21:39:04.050600   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:39:04.061893   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.066371   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.066424   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.072463   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:39:04.083929   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:39:04.094777   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.099380   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.099435   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.105125   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:39:04.116191   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:39:04.127408   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.132234   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.132315   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.138401   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:39:04.149542   80157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:39:04.154133   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:39:04.160171   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:39:04.166410   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:39:04.172650   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:39:04.178506   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:39:04.184375   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
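
	The openssl commands just above each check that a profile certificate is still valid 24 hours from now (-checkend 86400). A small Go equivalent using crypto/x509 is sketched below; this is an assumption about what the probe amounts to, not minikube's code, and the certificate path is one of the files named in the log.

	// expiresWithin reports whether the PEM certificate at path expires within
	// duration d, mirroring `openssl x509 -noout -checkend` semantics.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
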
	I0612 21:39:04.190412   80157 kubeadm.go:391] StartCluster: {Name:no-preload-087875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:39:04.190524   80157 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:39:04.190584   80157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:39:04.235297   80157 cri.go:89] found id: ""
	I0612 21:39:04.235362   80157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:39:04.246400   80157 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:39:04.246429   80157 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:39:04.246449   80157 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:39:04.246499   80157 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:39:04.257137   80157 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:39:04.258277   80157 kubeconfig.go:125] found "no-preload-087875" server: "https://192.168.72.63:8443"
	I0612 21:39:04.260656   80157 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:39:04.270637   80157 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.63
	I0612 21:39:04.270666   80157 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:39:04.270675   80157 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:39:04.270730   80157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:39:04.316487   80157 cri.go:89] found id: ""
	I0612 21:39:04.316550   80157 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:39:04.334814   80157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:39:04.346430   80157 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:39:04.346451   80157 kubeadm.go:156] found existing configuration files:
	
	I0612 21:39:04.346500   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:39:04.356362   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:39:04.356417   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:39:04.366999   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:39:04.378005   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:39:04.378061   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:39:04.388052   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:39:04.397130   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:39:04.397185   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:39:04.407053   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:39:04.416338   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:39:04.416395   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:39:04.426475   80157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:39:04.436852   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:04.565452   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.461610   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.676493   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.767236   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
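
	The five commands above re-run the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml during the control-plane restart. A simplified Go sketch of that sequence is shown below; it invokes the binary path from the log directly rather than through the "sudo env PATH=..." wrapper, so treat it as an illustration only.

	// Run each kubeadm init phase in order against the staged config.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		kubeadm := "/var/lib/minikube/binaries/v1.30.1/kubeadm"
		for _, p := range phases {
			args := append([]string{kubeadm, "init", "phase"}, p...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
				return
			}
		}
	}
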
	I0612 21:39:05.870855   80157 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:39:05.870960   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.372034   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.871680   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.906242   80157 api_server.go:72] duration metric: took 1.035387498s to wait for apiserver process to appear ...
	I0612 21:39:06.906273   80157 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:39:06.906296   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:06.906883   80157 api_server.go:269] stopped: https://192.168.72.63:8443/healthz: Get "https://192.168.72.63:8443/healthz": dial tcp 192.168.72.63:8443: connect: connection refused
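
	From here the log polls https://192.168.72.63:8443/healthz roughly every half second, tolerating the connection-refused, 403 and 500 responses that follow until the apiserver's post-start hooks finish and the endpoint returns 200 "ok". A minimal sketch of that wait, assuming an anonymous probe with TLS verification skipped (an illustration of the flow, not minikube's implementation):

	// Poll the apiserver healthz endpoint until it reports 200 or we give up.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.72.63:8443/healthz"
		for i := 0; i < 60; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver health")
	}
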
	I0612 21:39:02.987853   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:03.487382   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:03.987303   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:04.487852   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:04.987464   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.486928   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.987660   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.487208   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.987822   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:07.487497   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.166502   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:07.665452   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:09.665766   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:05.512792   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:08.012392   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:10.014073   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:07.407227   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.589285   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:39:09.589319   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:39:09.589336   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.726716   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:09.726753   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:09.907032   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.917718   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:09.917746   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:10.406997   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:10.412127   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:10.412156   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:10.906700   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:10.911262   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 200:
	ok
	I0612 21:39:10.918778   80157 api_server.go:141] control plane version: v1.30.1
	I0612 21:39:10.918813   80157 api_server.go:131] duration metric: took 4.012531107s to wait for apiserver health ...
	I0612 21:39:10.918824   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:39:10.918832   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:39:10.921012   80157 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:39:10.922401   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:39:10.948209   80157 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:39:10.974530   80157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:39:10.986054   80157 system_pods.go:59] 8 kube-system pods found
	I0612 21:39:10.986091   80157 system_pods.go:61] "coredns-7db6d8ff4d-sh68b" [17691219-bfda-443b-8049-e6e966aadb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:39:10.986102   80157 system_pods.go:61] "etcd-no-preload-087875" [3048b12a-4354-45fd-99c7-d2a84035e102] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:39:10.986114   80157 system_pods.go:61] "kube-apiserver-no-preload-087875" [0f39a5fd-1a64-479f-bb28-c19bc10b7ed3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:39:10.986127   80157 system_pods.go:61] "kube-controller-manager-no-preload-087875" [62cc49b8-b05f-4371-aa17-bea17d08d2f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:39:10.986141   80157 system_pods.go:61] "kube-proxy-htv9h" [e3eb4693-7896-4dd2-98b8-91f06b028a1e] Running
	I0612 21:39:10.986158   80157 system_pods.go:61] "kube-scheduler-no-preload-087875" [ef833b9d-75ca-43bd-b196-30594775b174] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:39:10.986170   80157 system_pods.go:61] "metrics-server-569cc877fc-d5mj6" [79ba2aad-c942-4162-b69a-5c7dd138a618] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:39:10.986178   80157 system_pods.go:61] "storage-provisioner" [5793c778-1a5c-4cfe-924a-b85b72df53cd] Running
	I0612 21:39:10.986187   80157 system_pods.go:74] duration metric: took 11.634011ms to wait for pod list to return data ...
	I0612 21:39:10.986199   80157 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:39:10.992801   80157 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:39:10.992843   80157 node_conditions.go:123] node cpu capacity is 2
	I0612 21:39:10.992856   80157 node_conditions.go:105] duration metric: took 6.648025ms to run NodePressure ...
	I0612 21:39:10.992878   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:11.263413   80157 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:39:11.271758   80157 kubeadm.go:733] kubelet initialised
	I0612 21:39:11.271781   80157 kubeadm.go:734] duration metric: took 8.347232ms waiting for restarted kubelet to initialise ...
	I0612 21:39:11.271789   80157 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:39:11.277940   80157 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace to be "Ready" ...
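
	The pod_ready lines that follow wait for each system-critical pod's Ready condition to become "True". A hedged sketch of that check, expressed with kubectl rather than minikube's internal client, is below; the kubectl context name matches the profile name in the log, which is an assumption about how the kubeconfig is set up.

	// Poll a kube-system pod until its Ready condition reports "True".
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func podReady(pod string) bool {
		out, err := exec.Command("kubectl", "--context", "no-preload-087875",
			"-n", "kube-system", "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		return err == nil && strings.TrimSpace(string(out)) == "True"
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			if podReady("coredns-7db6d8ff4d-sh68b") {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
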
	I0612 21:39:07.987732   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:08.486974   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:08.986873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:09.486941   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:09.986929   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:10.487754   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:10.987685   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:11.486910   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:11.987457   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:12.486873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:12.165604   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:14.166986   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:12.029928   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:14.512085   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:13.287555   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:15.786345   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:12.987394   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:13.486915   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:13.987880   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:14.486881   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:14.986951   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:15.487462   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:15.986850   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.487213   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.987066   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:17.487882   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.666123   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:18.666354   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:16.512936   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:19.013463   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:18.285110   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:20.788396   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:21.284869   80157 pod_ready.go:92] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:21.284902   80157 pod_ready.go:81] duration metric: took 10.006929439s for pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:21.284916   80157 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:17.987273   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:18.486996   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:18.987836   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:19.487622   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:19.987381   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:20.487005   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:20.987638   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.487670   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.987552   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:22.487438   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.166215   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:23.665272   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:21.512836   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:24.014108   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:23.291502   80157 pod_ready.go:102] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:25.791813   80157 pod_ready.go:92] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.791842   80157 pod_ready.go:81] duration metric: took 4.506916362s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.791854   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.796901   80157 pod_ready.go:92] pod "kube-apiserver-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.796928   80157 pod_ready.go:81] duration metric: took 5.066599ms for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.796939   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.801550   80157 pod_ready.go:92] pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.801571   80157 pod_ready.go:81] duration metric: took 4.624771ms for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.801580   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-htv9h" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.806178   80157 pod_ready.go:92] pod "kube-proxy-htv9h" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.806195   80157 pod_ready.go:81] duration metric: took 4.609956ms for pod "kube-proxy-htv9h" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.806204   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.809883   80157 pod_ready.go:92] pod "kube-scheduler-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.809902   80157 pod_ready.go:81] duration metric: took 3.691999ms for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.809914   80157 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:22.987165   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:23.487122   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:23.987804   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:24.487583   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:24.987647   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.487126   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.987251   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:26.486996   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:26.987044   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:27.486911   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.668272   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:28.164809   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:26.513220   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:29.013047   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:27.817352   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:30.315600   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:27.987822   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:28.487496   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:28.987166   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:29.487892   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:29.987787   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.487315   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.987933   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:31.487255   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:31.987793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:32.487881   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.165900   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.167795   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:34.665939   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:31.013473   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:33.015281   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.316680   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:34.317063   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:36.816905   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.987267   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:33.487678   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:33.987296   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:33.987371   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:34.028670   80762 cri.go:89] found id: ""
	I0612 21:39:34.028699   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.028710   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:34.028717   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:34.028778   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:34.068371   80762 cri.go:89] found id: ""
	I0612 21:39:34.068400   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.068412   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:34.068419   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:34.068485   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:34.104605   80762 cri.go:89] found id: ""
	I0612 21:39:34.104634   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.104643   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:34.104650   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:34.104745   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:34.150301   80762 cri.go:89] found id: ""
	I0612 21:39:34.150327   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.150335   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:34.150341   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:34.150396   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:34.191426   80762 cri.go:89] found id: ""
	I0612 21:39:34.191462   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.191475   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:34.191484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:34.191562   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:34.228483   80762 cri.go:89] found id: ""
	I0612 21:39:34.228523   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.228535   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:34.228543   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:34.228653   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:34.262834   80762 cri.go:89] found id: ""
	I0612 21:39:34.262863   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.262873   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:34.262881   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:34.262944   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:34.298283   80762 cri.go:89] found id: ""
	I0612 21:39:34.298312   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.298321   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:34.298330   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:34.298340   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:34.350889   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:34.350918   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:34.365264   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:34.365289   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:34.508130   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:34.508162   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:34.508180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:34.572036   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:34.572076   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:37.114371   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:37.127410   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:37.127492   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:37.168684   80762 cri.go:89] found id: ""
	I0612 21:39:37.168705   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.168714   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:37.168723   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:37.168798   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:37.208765   80762 cri.go:89] found id: ""
	I0612 21:39:37.208797   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.208808   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:37.208815   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:37.208875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:37.266245   80762 cri.go:89] found id: ""
	I0612 21:39:37.266270   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.266277   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:37.266283   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:37.266331   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:37.313557   80762 cri.go:89] found id: ""
	I0612 21:39:37.313586   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.313597   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:37.313606   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:37.313677   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:37.353292   80762 cri.go:89] found id: ""
	I0612 21:39:37.353318   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.353325   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:37.353332   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:37.353389   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:37.391940   80762 cri.go:89] found id: ""
	I0612 21:39:37.391974   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.391984   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:37.392015   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:37.392078   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:37.432133   80762 cri.go:89] found id: ""
	I0612 21:39:37.432154   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.432166   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:37.432174   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:37.432228   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:37.468274   80762 cri.go:89] found id: ""
	I0612 21:39:37.468302   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.468310   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:37.468328   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:37.468347   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:37.543904   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:37.543941   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:37.586957   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:37.586982   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:37.641247   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:37.641288   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:37.657076   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:37.657101   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:37.729279   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:37.165427   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:39.166383   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:35.512174   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:37.513222   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:40.012806   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:39.317119   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:41.817268   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:40.229638   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:40.243825   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:40.243889   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:40.282795   80762 cri.go:89] found id: ""
	I0612 21:39:40.282821   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.282829   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:40.282834   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:40.282879   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:40.320211   80762 cri.go:89] found id: ""
	I0612 21:39:40.320236   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.320246   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:40.320252   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:40.320338   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:40.356270   80762 cri.go:89] found id: ""
	I0612 21:39:40.356292   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.356300   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:40.356306   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:40.356353   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:40.394667   80762 cri.go:89] found id: ""
	I0612 21:39:40.394691   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.394699   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:40.394704   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:40.394751   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:40.432765   80762 cri.go:89] found id: ""
	I0612 21:39:40.432794   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.432804   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:40.432811   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:40.432883   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:40.472347   80762 cri.go:89] found id: ""
	I0612 21:39:40.472386   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.472406   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:40.472414   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:40.472477   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:40.508414   80762 cri.go:89] found id: ""
	I0612 21:39:40.508445   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.508456   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:40.508464   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:40.508521   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:40.546938   80762 cri.go:89] found id: ""
	I0612 21:39:40.546964   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.546972   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:40.546981   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:40.546993   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:40.621356   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:40.621380   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:40.621398   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:40.703830   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:40.703865   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:40.744915   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:40.744965   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:40.798883   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:40.798920   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:41.167469   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:43.667403   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:42.512351   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:44.512639   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:44.317053   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:46.317350   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:43.315905   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:43.330150   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:43.330221   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:43.377307   80762 cri.go:89] found id: ""
	I0612 21:39:43.377337   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.377347   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:43.377362   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:43.377426   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:43.412608   80762 cri.go:89] found id: ""
	I0612 21:39:43.412638   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.412648   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:43.412654   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:43.412718   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:43.446716   80762 cri.go:89] found id: ""
	I0612 21:39:43.446746   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.446755   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:43.446762   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:43.446823   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:43.484607   80762 cri.go:89] found id: ""
	I0612 21:39:43.484636   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.484647   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:43.484655   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:43.484700   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:43.522400   80762 cri.go:89] found id: ""
	I0612 21:39:43.522427   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.522438   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:43.522445   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:43.522529   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:43.559121   80762 cri.go:89] found id: ""
	I0612 21:39:43.559147   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.559163   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:43.559211   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:43.559292   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:43.595886   80762 cri.go:89] found id: ""
	I0612 21:39:43.595919   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.595937   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:43.595945   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:43.596011   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:43.638549   80762 cri.go:89] found id: ""
	I0612 21:39:43.638573   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.638583   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:43.638594   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:43.638609   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:43.705300   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:43.705338   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:43.723246   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:43.723281   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:43.807735   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:43.807760   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:43.807870   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:43.882971   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:43.883017   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:46.421476   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:46.434447   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:46.434532   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:46.470710   80762 cri.go:89] found id: ""
	I0612 21:39:46.470745   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.470758   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:46.470765   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:46.470828   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:46.504843   80762 cri.go:89] found id: ""
	I0612 21:39:46.504871   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.504878   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:46.504884   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:46.504941   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:46.542937   80762 cri.go:89] found id: ""
	I0612 21:39:46.542965   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.542973   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:46.542979   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:46.543035   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:46.581098   80762 cri.go:89] found id: ""
	I0612 21:39:46.581124   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.581133   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:46.581143   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:46.581189   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:46.617289   80762 cri.go:89] found id: ""
	I0612 21:39:46.617319   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.617329   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:46.617337   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:46.617402   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:46.651012   80762 cri.go:89] found id: ""
	I0612 21:39:46.651045   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.651057   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:46.651070   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:46.651141   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:46.688344   80762 cri.go:89] found id: ""
	I0612 21:39:46.688370   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.688379   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:46.688388   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:46.688451   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:46.724349   80762 cri.go:89] found id: ""
	I0612 21:39:46.724374   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.724382   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:46.724390   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:46.724404   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:46.797866   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:46.797894   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:46.797912   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:46.887520   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:46.887557   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:46.928143   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:46.928182   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:46.981416   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:46.981451   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:46.164845   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:48.166925   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:46.513519   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:49.016041   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:48.816335   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:50.816407   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:49.497028   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:49.510077   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:49.510147   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:49.544313   80762 cri.go:89] found id: ""
	I0612 21:39:49.544349   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.544359   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:49.544365   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:49.544416   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:49.580220   80762 cri.go:89] found id: ""
	I0612 21:39:49.580248   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.580256   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:49.580262   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:49.580316   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:49.619582   80762 cri.go:89] found id: ""
	I0612 21:39:49.619607   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.619615   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:49.619620   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:49.619692   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:49.656453   80762 cri.go:89] found id: ""
	I0612 21:39:49.656479   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.656487   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:49.656493   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:49.656557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:49.694285   80762 cri.go:89] found id: ""
	I0612 21:39:49.694318   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.694330   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:49.694338   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:49.694417   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:49.731100   80762 cri.go:89] found id: ""
	I0612 21:39:49.731127   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.731135   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:49.731140   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:49.731209   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:49.767709   80762 cri.go:89] found id: ""
	I0612 21:39:49.767731   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.767738   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:49.767744   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:49.767787   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:49.801231   80762 cri.go:89] found id: ""
	I0612 21:39:49.801265   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.801283   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:49.801294   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:49.801309   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:49.848500   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:49.848542   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:49.900084   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:49.900121   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:49.916208   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:49.916234   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:49.983283   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:49.983310   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:49.983325   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:52.566884   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:52.580400   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:52.580476   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:52.615922   80762 cri.go:89] found id: ""
	I0612 21:39:52.615957   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.615970   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:52.615978   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:52.616038   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:52.657316   80762 cri.go:89] found id: ""
	I0612 21:39:52.657348   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.657356   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:52.657362   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:52.657417   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:52.692426   80762 cri.go:89] found id: ""
	I0612 21:39:52.692459   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.692470   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:52.692478   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:52.692542   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:52.726800   80762 cri.go:89] found id: ""
	I0612 21:39:52.726835   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.726848   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:52.726856   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:52.726921   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:52.764283   80762 cri.go:89] found id: ""
	I0612 21:39:52.764314   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.764326   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:52.764341   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:52.764395   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:52.802279   80762 cri.go:89] found id: ""
	I0612 21:39:52.802311   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.802324   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:52.802331   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:52.802385   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:52.841433   80762 cri.go:89] found id: ""
	I0612 21:39:52.841466   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.841477   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:52.841484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:52.841546   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:50.667322   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:53.165294   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:51.016137   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:53.019373   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:52.818876   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:55.316845   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:52.881417   80762 cri.go:89] found id: ""
	I0612 21:39:52.881441   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.881449   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:52.881457   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:52.881468   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:52.936228   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:52.936262   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:52.950688   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:52.950718   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:53.025101   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:53.025122   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:53.025138   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:53.114986   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:53.115031   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:55.653893   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:55.668983   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:55.669047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:55.708445   80762 cri.go:89] found id: ""
	I0612 21:39:55.708475   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.708486   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:55.708494   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:55.708558   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:55.745158   80762 cri.go:89] found id: ""
	I0612 21:39:55.745185   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.745195   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:55.745204   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:55.745270   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:55.785322   80762 cri.go:89] found id: ""
	I0612 21:39:55.785344   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.785363   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:55.785370   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:55.785442   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:55.822371   80762 cri.go:89] found id: ""
	I0612 21:39:55.822397   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.822408   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:55.822416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:55.822484   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:55.856866   80762 cri.go:89] found id: ""
	I0612 21:39:55.856888   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.856895   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:55.856900   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:55.856954   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:55.891618   80762 cri.go:89] found id: ""
	I0612 21:39:55.891648   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.891660   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:55.891668   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:55.891731   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:55.927483   80762 cri.go:89] found id: ""
	I0612 21:39:55.927504   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.927513   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:55.927519   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:55.927572   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:55.963546   80762 cri.go:89] found id: ""
	I0612 21:39:55.963572   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.963584   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:55.963597   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:55.963616   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:56.037421   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:56.037442   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:56.037453   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:56.112148   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:56.112185   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:56.163359   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:56.163389   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:56.217109   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:56.217144   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:55.166499   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:57.665517   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:59.665625   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:55.513267   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:58.015558   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:57.317149   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:59.320306   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:01.815855   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:58.733278   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:58.746890   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:58.746951   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:58.785222   80762 cri.go:89] found id: ""
	I0612 21:39:58.785252   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.785263   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:58.785269   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:58.785343   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:58.824421   80762 cri.go:89] found id: ""
	I0612 21:39:58.824448   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.824455   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:58.824461   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:58.824521   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:58.863626   80762 cri.go:89] found id: ""
	I0612 21:39:58.863658   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.863669   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:58.863728   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:58.863818   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:58.904040   80762 cri.go:89] found id: ""
	I0612 21:39:58.904064   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.904073   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:58.904080   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:58.904147   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:58.937508   80762 cri.go:89] found id: ""
	I0612 21:39:58.937543   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.937557   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:58.937565   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:58.937632   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:58.974283   80762 cri.go:89] found id: ""
	I0612 21:39:58.974311   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.974322   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:58.974330   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:58.974383   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:59.009954   80762 cri.go:89] found id: ""
	I0612 21:39:59.009987   80762 logs.go:276] 0 containers: []
	W0612 21:39:59.009999   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:59.010007   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:59.010072   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:59.051911   80762 cri.go:89] found id: ""
	I0612 21:39:59.051935   80762 logs.go:276] 0 containers: []
	W0612 21:39:59.051943   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:59.051951   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:59.051961   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:59.102911   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:59.102942   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:59.116576   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:59.116608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:59.189590   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:59.189619   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:59.189634   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:59.270192   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:59.270232   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:01.820872   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:01.834916   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:01.835000   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:01.870526   80762 cri.go:89] found id: ""
	I0612 21:40:01.870560   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.870572   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:01.870579   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:01.870642   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:01.909581   80762 cri.go:89] found id: ""
	I0612 21:40:01.909614   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.909626   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:01.909633   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:01.909727   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:01.947944   80762 cri.go:89] found id: ""
	I0612 21:40:01.947976   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.947988   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:01.947995   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:01.948059   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:01.985745   80762 cri.go:89] found id: ""
	I0612 21:40:01.985781   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.985793   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:01.985800   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:01.985860   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:02.023716   80762 cri.go:89] found id: ""
	I0612 21:40:02.023741   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.023749   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:02.023754   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:02.023801   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:02.059136   80762 cri.go:89] found id: ""
	I0612 21:40:02.059168   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.059203   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:02.059212   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:02.059283   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:02.104520   80762 cri.go:89] found id: ""
	I0612 21:40:02.104544   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.104552   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:02.104558   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:02.104618   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:02.146130   80762 cri.go:89] found id: ""
	I0612 21:40:02.146164   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.146176   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:02.146187   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:02.146202   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:02.199672   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:02.199710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:02.215224   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:02.215256   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:02.290030   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:02.290057   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:02.290072   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:02.374579   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:02.374615   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:01.667390   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:04.165253   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:00.512229   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:02.513298   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:05.018848   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:03.816610   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:05.818990   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:04.915345   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:04.928323   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:04.928404   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:04.963267   80762 cri.go:89] found id: ""
	I0612 21:40:04.963297   80762 logs.go:276] 0 containers: []
	W0612 21:40:04.963310   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:04.963319   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:04.963386   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:04.998378   80762 cri.go:89] found id: ""
	I0612 21:40:04.998409   80762 logs.go:276] 0 containers: []
	W0612 21:40:04.998420   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:04.998426   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:04.998498   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:05.038094   80762 cri.go:89] found id: ""
	I0612 21:40:05.038118   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.038126   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:05.038132   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:05.038181   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:05.074331   80762 cri.go:89] found id: ""
	I0612 21:40:05.074366   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.074379   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:05.074386   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:05.074462   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:05.109332   80762 cri.go:89] found id: ""
	I0612 21:40:05.109359   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.109368   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:05.109373   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:05.109423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:05.143875   80762 cri.go:89] found id: ""
	I0612 21:40:05.143908   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.143918   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:05.143926   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:05.143990   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:05.183695   80762 cri.go:89] found id: ""
	I0612 21:40:05.183724   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.183731   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:05.183737   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:05.183792   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:05.222852   80762 cri.go:89] found id: ""
	I0612 21:40:05.222878   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.222887   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:05.222895   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:05.222907   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:05.262661   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:05.262687   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:05.315563   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:05.315593   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:05.332128   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:05.332163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:05.411675   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:05.411699   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:05.411712   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:06.665324   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:08.667163   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:07.512587   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:10.012843   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:08.316990   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:10.816093   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:07.991930   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:08.005743   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:08.005807   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:08.041685   80762 cri.go:89] found id: ""
	I0612 21:40:08.041714   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.041724   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:08.041732   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:08.041791   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:08.080875   80762 cri.go:89] found id: ""
	I0612 21:40:08.080905   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.080916   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:08.080925   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:08.080993   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:08.117290   80762 cri.go:89] found id: ""
	I0612 21:40:08.117316   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.117323   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:08.117329   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:08.117387   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:08.154345   80762 cri.go:89] found id: ""
	I0612 21:40:08.154376   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.154387   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:08.154395   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:08.154459   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:08.192913   80762 cri.go:89] found id: ""
	I0612 21:40:08.192947   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.192957   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:08.192969   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:08.193033   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:08.235732   80762 cri.go:89] found id: ""
	I0612 21:40:08.235764   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.235775   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:08.235782   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:08.235853   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:08.274282   80762 cri.go:89] found id: ""
	I0612 21:40:08.274306   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.274314   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:08.274320   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:08.274366   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:08.314585   80762 cri.go:89] found id: ""
	I0612 21:40:08.314608   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.314619   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:08.314628   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:08.314641   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:08.331693   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:08.331725   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:08.414541   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:08.414565   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:08.414584   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:08.496428   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:08.496460   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:08.546991   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:08.547020   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:11.099778   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:11.113450   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:11.113539   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:11.150426   80762 cri.go:89] found id: ""
	I0612 21:40:11.150451   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.150459   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:11.150464   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:11.150524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:11.189931   80762 cri.go:89] found id: ""
	I0612 21:40:11.189958   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.189967   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:11.189972   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:11.190031   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:11.228116   80762 cri.go:89] found id: ""
	I0612 21:40:11.228144   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.228154   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:11.228161   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:11.228243   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:11.268639   80762 cri.go:89] found id: ""
	I0612 21:40:11.268664   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.268672   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:11.268678   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:11.268723   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:11.306077   80762 cri.go:89] found id: ""
	I0612 21:40:11.306105   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.306116   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:11.306123   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:11.306187   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:11.344360   80762 cri.go:89] found id: ""
	I0612 21:40:11.344388   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.344399   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:11.344418   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:11.344475   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:11.382906   80762 cri.go:89] found id: ""
	I0612 21:40:11.382937   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.382948   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:11.382957   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:11.383027   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:11.418388   80762 cri.go:89] found id: ""
	I0612 21:40:11.418419   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.418429   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:11.418439   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:11.418453   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:11.432204   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:11.432241   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:11.508219   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:11.508251   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:11.508263   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:11.593021   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:11.593058   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:11.634056   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:11.634087   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:11.165384   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:13.170153   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:12.013303   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:14.013454   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:12.817129   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:15.316929   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:14.187831   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:14.203153   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:14.203248   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:14.239693   80762 cri.go:89] found id: ""
	I0612 21:40:14.239716   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.239723   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:14.239729   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:14.239827   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:14.273206   80762 cri.go:89] found id: ""
	I0612 21:40:14.273234   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.273244   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:14.273251   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:14.273313   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:14.315512   80762 cri.go:89] found id: ""
	I0612 21:40:14.315592   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.315610   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:14.315618   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:14.315679   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:14.352454   80762 cri.go:89] found id: ""
	I0612 21:40:14.352483   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.352496   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:14.352504   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:14.352554   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:14.387845   80762 cri.go:89] found id: ""
	I0612 21:40:14.387872   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.387880   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:14.387886   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:14.387935   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:14.423220   80762 cri.go:89] found id: ""
	I0612 21:40:14.423245   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.423254   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:14.423259   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:14.423322   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:14.457744   80762 cri.go:89] found id: ""
	I0612 21:40:14.457772   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.457784   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:14.457791   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:14.457849   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:14.493580   80762 cri.go:89] found id: ""
	I0612 21:40:14.493611   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.493622   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:14.493633   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:14.493669   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:14.566867   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:14.566894   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:14.566913   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:14.645916   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:14.645959   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:14.690232   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:14.690262   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:14.741532   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:14.741576   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:17.257886   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:17.271841   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:17.271910   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:17.309628   80762 cri.go:89] found id: ""
	I0612 21:40:17.309654   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.309667   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:17.309675   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:17.309746   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:17.346671   80762 cri.go:89] found id: ""
	I0612 21:40:17.346752   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.346769   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:17.346777   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:17.346842   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:17.381145   80762 cri.go:89] found id: ""
	I0612 21:40:17.381169   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.381177   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:17.381184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:17.381241   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:17.417159   80762 cri.go:89] found id: ""
	I0612 21:40:17.417179   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.417187   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:17.417194   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:17.417254   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:17.453189   80762 cri.go:89] found id: ""
	I0612 21:40:17.453213   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.453220   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:17.453226   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:17.453284   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:17.510988   80762 cri.go:89] found id: ""
	I0612 21:40:17.511012   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.511019   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:17.511026   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:17.511083   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:17.548141   80762 cri.go:89] found id: ""
	I0612 21:40:17.548166   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.548176   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:17.548182   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:17.548243   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:17.584591   80762 cri.go:89] found id: ""
	I0612 21:40:17.584619   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.584627   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:17.584637   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:17.584647   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:17.628627   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:17.628662   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:17.682792   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:17.682823   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:17.697921   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:17.697959   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:17.770591   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:17.770617   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:17.770633   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:15.665831   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:18.165059   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:16.014130   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:18.513491   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:17.817443   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.316576   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.350181   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:20.363671   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:20.363743   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:20.399858   80762 cri.go:89] found id: ""
	I0612 21:40:20.399889   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.399896   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:20.399903   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:20.399963   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:20.437715   80762 cri.go:89] found id: ""
	I0612 21:40:20.437755   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.437766   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:20.437776   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:20.437843   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:20.472525   80762 cri.go:89] found id: ""
	I0612 21:40:20.472558   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.472573   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:20.472582   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:20.472642   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:20.507923   80762 cri.go:89] found id: ""
	I0612 21:40:20.507948   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.507959   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:20.507966   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:20.508029   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:20.545471   80762 cri.go:89] found id: ""
	I0612 21:40:20.545502   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.545512   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:20.545519   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:20.545586   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:20.583793   80762 cri.go:89] found id: ""
	I0612 21:40:20.583829   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.583839   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:20.583846   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:20.583912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:20.624399   80762 cri.go:89] found id: ""
	I0612 21:40:20.624438   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.624449   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:20.624467   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:20.624530   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:20.665158   80762 cri.go:89] found id: ""
	I0612 21:40:20.665184   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.665194   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:20.665203   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:20.665217   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:20.743062   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:20.743101   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:20.792573   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:20.792613   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:20.847998   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:20.848033   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:20.863447   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:20.863497   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:20.938020   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:20.165455   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:22.665110   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:24.665262   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.513556   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:23.014750   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:22.316950   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:24.815377   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:26.817066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:23.438289   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:23.453792   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:23.453855   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:23.494044   80762 cri.go:89] found id: ""
	I0612 21:40:23.494070   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.494077   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:23.494083   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:23.494144   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:23.533278   80762 cri.go:89] found id: ""
	I0612 21:40:23.533305   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.533313   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:23.533319   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:23.533380   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:23.568504   80762 cri.go:89] found id: ""
	I0612 21:40:23.568538   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.568549   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:23.568556   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:23.568619   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:23.610596   80762 cri.go:89] found id: ""
	I0612 21:40:23.610624   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.610633   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:23.610638   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:23.610690   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:23.651856   80762 cri.go:89] found id: ""
	I0612 21:40:23.651886   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.651896   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:23.651903   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:23.651978   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:23.690989   80762 cri.go:89] found id: ""
	I0612 21:40:23.691020   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.691030   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:23.691036   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:23.691089   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:23.730417   80762 cri.go:89] found id: ""
	I0612 21:40:23.730454   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.730467   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:23.730476   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:23.730538   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:23.773887   80762 cri.go:89] found id: ""
	I0612 21:40:23.773913   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.773921   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:23.773932   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:23.773947   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:23.825771   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:23.825805   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:23.840136   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:23.840163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:23.933645   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:23.933670   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:23.933686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:24.020205   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:24.020243   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:26.566746   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:26.579557   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:26.579612   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:26.614721   80762 cri.go:89] found id: ""
	I0612 21:40:26.614749   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.614757   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:26.614763   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:26.614815   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:26.651398   80762 cri.go:89] found id: ""
	I0612 21:40:26.651427   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.651437   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:26.651445   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:26.651506   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:26.688217   80762 cri.go:89] found id: ""
	I0612 21:40:26.688249   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.688261   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:26.688268   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:26.688333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:26.721316   80762 cri.go:89] found id: ""
	I0612 21:40:26.721346   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.721357   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:26.721364   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:26.721424   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:26.758842   80762 cri.go:89] found id: ""
	I0612 21:40:26.758868   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.758878   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:26.758885   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:26.758957   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:26.795696   80762 cri.go:89] found id: ""
	I0612 21:40:26.795725   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.795733   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:26.795738   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:26.795788   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:26.834903   80762 cri.go:89] found id: ""
	I0612 21:40:26.834932   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.834941   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:26.834947   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:26.835020   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:26.872751   80762 cri.go:89] found id: ""
	I0612 21:40:26.872788   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.872796   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:26.872805   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:26.872817   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:26.952401   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:26.952440   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:26.990548   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:26.990583   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:27.042973   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:27.043029   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:27.058348   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:27.058379   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:27.133047   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:26.666430   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.165063   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:25.513982   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:28.012556   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:30.017664   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.315668   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:31.316817   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.634105   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:29.654113   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:29.654171   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:29.700138   80762 cri.go:89] found id: ""
	I0612 21:40:29.700169   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.700179   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:29.700188   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:29.700260   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:29.751599   80762 cri.go:89] found id: ""
	I0612 21:40:29.751628   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.751638   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:29.751646   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:29.751699   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:29.801971   80762 cri.go:89] found id: ""
	I0612 21:40:29.801995   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.802003   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:29.802008   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:29.802059   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:29.839381   80762 cri.go:89] found id: ""
	I0612 21:40:29.839407   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.839418   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:29.839426   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:29.839484   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:29.876634   80762 cri.go:89] found id: ""
	I0612 21:40:29.876661   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.876668   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:29.876675   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:29.876721   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:29.909673   80762 cri.go:89] found id: ""
	I0612 21:40:29.909707   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.909718   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:29.909726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:29.909791   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:29.947984   80762 cri.go:89] found id: ""
	I0612 21:40:29.948019   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.948029   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:29.948037   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:29.948099   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:29.988611   80762 cri.go:89] found id: ""
	I0612 21:40:29.988639   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.988650   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:29.988660   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:29.988675   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:30.073180   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:30.073216   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:30.114703   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:30.114732   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:30.173242   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:30.173278   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:30.189081   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:30.189112   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:30.263564   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
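
When no control-plane containers are found, the "Gathering logs for ..." step falls back to host-level sources: the kubelet and CRI-O journals, dmesg, `kubectl describe nodes` against the bundled kubeconfig, and raw container status. A simplified sketch of that fan-out is below; the command strings are copied from the Run: lines above, but the sshRun helper is hypothetical and just runs bash locally instead of over SSH.

// Sketch of the log-gathering fan-out visible above (simplified).
package main

import (
	"fmt"
	"os/exec"
)

// sshRun stands in for minikube's ssh_runner; here it runs bash locally.
func sshRun(cmd string) error {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("%s\n%s\n", cmd, out)
	return err
}

func main() {
	sources := map[string]string{
		"kubelet":          `sudo journalctl -u kubelet -n 400`,
		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"describe nodes":   `sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
		"CRI-O":            `sudo journalctl -u crio -n 400`,
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		fmt.Println("Gathering logs for", name, "...")
		if err := sshRun(cmd); err != nil {
			fmt.Printf("failed %s: %v\n", name, err)
		}
	}
}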
	I0612 21:40:32.763967   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:32.776738   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:32.776808   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:32.813088   80762 cri.go:89] found id: ""
	I0612 21:40:32.813115   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.813125   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:32.813132   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:32.813195   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:32.850960   80762 cri.go:89] found id: ""
	I0612 21:40:32.850987   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.850996   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:32.851004   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:32.851065   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:31.166578   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:33.669302   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:32.512480   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:34.512817   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:33.815867   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:35.817105   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:32.887229   80762 cri.go:89] found id: ""
	I0612 21:40:32.887259   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.887270   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:32.887277   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:32.887346   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:32.923123   80762 cri.go:89] found id: ""
	I0612 21:40:32.923148   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.923158   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:32.923164   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:32.923242   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:32.962603   80762 cri.go:89] found id: ""
	I0612 21:40:32.962628   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.962638   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:32.962644   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:32.962695   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:32.998971   80762 cri.go:89] found id: ""
	I0612 21:40:32.999025   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.999037   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:32.999046   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:32.999120   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:33.037640   80762 cri.go:89] found id: ""
	I0612 21:40:33.037670   80762 logs.go:276] 0 containers: []
	W0612 21:40:33.037680   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:33.037686   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:33.037748   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:33.073758   80762 cri.go:89] found id: ""
	I0612 21:40:33.073787   80762 logs.go:276] 0 containers: []
	W0612 21:40:33.073794   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:33.073804   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:33.073815   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:33.124478   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:33.124512   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:33.139010   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:33.139036   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:33.207693   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:33.207716   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:33.207732   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:33.287710   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:33.287746   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:35.831654   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:35.845783   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:35.845845   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:35.882097   80762 cri.go:89] found id: ""
	I0612 21:40:35.882129   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.882141   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:35.882149   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:35.882205   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:35.920931   80762 cri.go:89] found id: ""
	I0612 21:40:35.920972   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.920980   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:35.920985   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:35.921061   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:35.958689   80762 cri.go:89] found id: ""
	I0612 21:40:35.958712   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.958721   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:35.958726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:35.958774   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:35.994973   80762 cri.go:89] found id: ""
	I0612 21:40:35.995028   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.995040   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:35.995048   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:35.995114   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:36.035679   80762 cri.go:89] found id: ""
	I0612 21:40:36.035707   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.035715   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:36.035721   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:36.035768   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:36.071498   80762 cri.go:89] found id: ""
	I0612 21:40:36.071525   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.071534   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:36.071544   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:36.071594   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:36.107367   80762 cri.go:89] found id: ""
	I0612 21:40:36.107397   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.107406   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:36.107413   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:36.107466   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:36.148668   80762 cri.go:89] found id: ""
	I0612 21:40:36.148699   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.148710   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:36.148721   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:36.148736   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:36.207719   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:36.207765   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:36.223129   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:36.223158   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:36.290786   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:36.290809   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:36.290822   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:36.375361   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:36.375398   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:36.165430   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.165989   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:37.015936   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:39.513497   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.318886   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:40.815802   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
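
Interleaved with the log-gathering loop, three other test processes (pids 80404, 80243, 80157) keep polling their metrics-server pods and logging `has status "Ready":"False"` until the pod becomes ready or the wait times out. A minimal sketch of the Ready-condition check those pod_ready.go lines imply, written against the upstream k8s.io/api types rather than minikube's own code:

// Sketch of the pod readiness check implied by the pod_ready.go lines above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the Pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}}
	fmt.Printf("pod %q Ready: %v\n", "metrics-server-569cc877fc-bkhxn", isPodReady(pod))
}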
	I0612 21:40:38.921100   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:38.935420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:38.935491   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:38.970519   80762 cri.go:89] found id: ""
	I0612 21:40:38.970548   80762 logs.go:276] 0 containers: []
	W0612 21:40:38.970559   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:38.970567   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:38.970639   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:39.005866   80762 cri.go:89] found id: ""
	I0612 21:40:39.005888   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.005896   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:39.005902   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:39.005954   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:39.043619   80762 cri.go:89] found id: ""
	I0612 21:40:39.043647   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.043655   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:39.043661   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:39.043709   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:39.081311   80762 cri.go:89] found id: ""
	I0612 21:40:39.081336   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.081344   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:39.081350   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:39.081410   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:39.117326   80762 cri.go:89] found id: ""
	I0612 21:40:39.117358   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.117367   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:39.117372   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:39.117423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:39.151785   80762 cri.go:89] found id: ""
	I0612 21:40:39.151819   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.151828   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:39.151835   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:39.151899   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:39.187031   80762 cri.go:89] found id: ""
	I0612 21:40:39.187057   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.187065   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:39.187071   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:39.187119   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:39.222186   80762 cri.go:89] found id: ""
	I0612 21:40:39.222212   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.222223   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:39.222233   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:39.222245   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:39.276126   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:39.276164   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:39.291631   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:39.291658   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:39.365615   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:39.365641   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:39.365659   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:39.442548   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:39.442600   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:41.980840   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:41.996629   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:41.996686   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:42.034158   80762 cri.go:89] found id: ""
	I0612 21:40:42.034186   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.034195   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:42.034202   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:42.034274   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:42.070981   80762 cri.go:89] found id: ""
	I0612 21:40:42.071011   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.071021   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:42.071028   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:42.071093   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:42.108282   80762 cri.go:89] found id: ""
	I0612 21:40:42.108309   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.108316   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:42.108322   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:42.108369   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:42.146394   80762 cri.go:89] found id: ""
	I0612 21:40:42.146423   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.146434   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:42.146454   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:42.146539   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:42.183577   80762 cri.go:89] found id: ""
	I0612 21:40:42.183601   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.183608   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:42.183614   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:42.183662   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:42.222069   80762 cri.go:89] found id: ""
	I0612 21:40:42.222100   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.222109   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:42.222115   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:42.222168   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:42.259128   80762 cri.go:89] found id: ""
	I0612 21:40:42.259155   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.259164   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:42.259192   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:42.259282   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:42.296321   80762 cri.go:89] found id: ""
	I0612 21:40:42.296354   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.296368   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:42.296380   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:42.296400   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:42.311098   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:42.311137   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:42.386116   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
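
The recurring stderr message "The connection to the server localhost:8443 was refused" simply means nothing is listening on the apiserver port inside the VM, which is consistent with the empty crictl listings; minikube records the failed describe as a warning (logs.go:130) and continues with the remaining sources. A tiny illustrative probe of that symptom (not part of the test suite) could look like:

// Minimal probe of the symptom in the stderr blocks above:
// is anything listening on the apiserver port?
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("connection refused or timed out:", err) // matches the kubectl error above
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}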
	I0612 21:40:42.386144   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:42.386163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:42.467016   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:42.467054   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:42.509143   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:42.509180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:40.166288   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.664817   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:44.665596   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.017043   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:44.513368   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.816702   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:45.316890   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:45.062872   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:45.076570   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:45.076658   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:45.114362   80762 cri.go:89] found id: ""
	I0612 21:40:45.114394   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.114404   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:45.114412   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:45.114478   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:45.151577   80762 cri.go:89] found id: ""
	I0612 21:40:45.151609   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.151620   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:45.151627   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:45.151689   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:45.188753   80762 cri.go:89] found id: ""
	I0612 21:40:45.188785   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.188795   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:45.188802   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:45.188861   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:45.224775   80762 cri.go:89] found id: ""
	I0612 21:40:45.224801   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.224808   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:45.224814   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:45.224873   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:45.260440   80762 cri.go:89] found id: ""
	I0612 21:40:45.260472   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.260483   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:45.260490   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:45.260547   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:45.297662   80762 cri.go:89] found id: ""
	I0612 21:40:45.297697   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.297709   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:45.297716   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:45.297774   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:45.335637   80762 cri.go:89] found id: ""
	I0612 21:40:45.335669   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.335682   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:45.335690   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:45.335753   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:45.371523   80762 cri.go:89] found id: ""
	I0612 21:40:45.371580   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.371590   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:45.371599   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:45.371610   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:45.424029   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:45.424065   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:45.440339   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:45.440378   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:45.509504   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:45.509526   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:45.509541   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:45.591857   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:45.591893   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:47.166437   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.665544   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:47.016561   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.511894   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:47.320090   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.816816   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:48.135912   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:48.151271   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:48.151331   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:48.192740   80762 cri.go:89] found id: ""
	I0612 21:40:48.192775   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.192788   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:48.192798   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:48.192875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:48.230440   80762 cri.go:89] found id: ""
	I0612 21:40:48.230469   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.230479   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:48.230487   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:48.230549   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:48.270892   80762 cri.go:89] found id: ""
	I0612 21:40:48.270922   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.270933   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:48.270941   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:48.270996   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:48.308555   80762 cri.go:89] found id: ""
	I0612 21:40:48.308580   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.308588   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:48.308594   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:48.308640   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:48.342705   80762 cri.go:89] found id: ""
	I0612 21:40:48.342727   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.342735   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:48.342741   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:48.342788   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:48.377418   80762 cri.go:89] found id: ""
	I0612 21:40:48.377450   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.377461   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:48.377468   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:48.377535   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:48.413092   80762 cri.go:89] found id: ""
	I0612 21:40:48.413126   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.413141   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:48.413149   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:48.413215   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:48.447673   80762 cri.go:89] found id: ""
	I0612 21:40:48.447699   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.447708   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:48.447716   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:48.447728   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:48.488508   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:48.488542   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:48.540573   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:48.540608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:48.554735   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:48.554762   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:48.632074   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:48.632098   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:48.632117   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:51.212336   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:51.227428   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:51.227493   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:51.268124   80762 cri.go:89] found id: ""
	I0612 21:40:51.268157   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.268167   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:51.268172   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:51.268220   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:51.305751   80762 cri.go:89] found id: ""
	I0612 21:40:51.305777   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.305785   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:51.305793   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:51.305849   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:51.347292   80762 cri.go:89] found id: ""
	I0612 21:40:51.347318   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.347325   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:51.347332   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:51.347394   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:51.387476   80762 cri.go:89] found id: ""
	I0612 21:40:51.387501   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.387509   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:51.387515   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:51.387573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:51.431992   80762 cri.go:89] found id: ""
	I0612 21:40:51.432019   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.432029   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:51.432036   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:51.432096   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:51.477204   80762 cri.go:89] found id: ""
	I0612 21:40:51.477235   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.477246   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:51.477254   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:51.477346   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:51.518449   80762 cri.go:89] found id: ""
	I0612 21:40:51.518477   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.518488   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:51.518502   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:51.518562   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:51.554991   80762 cri.go:89] found id: ""
	I0612 21:40:51.555015   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.555024   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:51.555033   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:51.555046   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:51.606732   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:51.606769   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:51.620512   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:51.620538   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:51.697029   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:51.697058   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:51.697074   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:51.775401   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:51.775437   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:51.666561   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.166247   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:51.512909   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.012887   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:52.315904   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.316764   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:56.816819   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.318059   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:54.331420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:54.331509   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:54.367886   80762 cri.go:89] found id: ""
	I0612 21:40:54.367926   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.367948   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:54.367959   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:54.368047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:54.403998   80762 cri.go:89] found id: ""
	I0612 21:40:54.404023   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.404034   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:54.404041   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:54.404108   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:54.441449   80762 cri.go:89] found id: ""
	I0612 21:40:54.441480   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.441491   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:54.441498   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:54.441557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:54.476459   80762 cri.go:89] found id: ""
	I0612 21:40:54.476490   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.476500   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:54.476508   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:54.476573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:54.515337   80762 cri.go:89] found id: ""
	I0612 21:40:54.515360   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.515368   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:54.515374   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:54.515423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:54.551447   80762 cri.go:89] found id: ""
	I0612 21:40:54.551468   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.551475   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:54.551481   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:54.551528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:54.587082   80762 cri.go:89] found id: ""
	I0612 21:40:54.587114   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.587125   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:54.587145   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:54.587225   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:54.624211   80762 cri.go:89] found id: ""
	I0612 21:40:54.624235   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.624257   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:54.624268   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:54.624282   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:54.677816   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:54.677848   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:54.693725   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:54.693749   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:54.772229   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:54.772255   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:54.772273   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:54.852543   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:54.852578   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:57.397722   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:57.411082   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:57.411145   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:57.449633   80762 cri.go:89] found id: ""
	I0612 21:40:57.449662   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.449673   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:57.449680   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:57.449745   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:57.489855   80762 cri.go:89] found id: ""
	I0612 21:40:57.489880   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.489889   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:57.489894   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:57.489952   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:57.528986   80762 cri.go:89] found id: ""
	I0612 21:40:57.529006   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.529014   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:57.529019   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:57.529081   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:57.566701   80762 cri.go:89] found id: ""
	I0612 21:40:57.566730   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.566739   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:57.566746   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:57.566800   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:57.601114   80762 cri.go:89] found id: ""
	I0612 21:40:57.601137   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.601145   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:57.601151   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:57.601212   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:57.636120   80762 cri.go:89] found id: ""
	I0612 21:40:57.636145   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.636155   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:57.636163   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:57.636225   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:57.676912   80762 cri.go:89] found id: ""
	I0612 21:40:57.676953   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.676960   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:57.676966   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:57.677039   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:57.714671   80762 cri.go:89] found id: ""
	I0612 21:40:57.714691   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.714699   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:57.714707   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:57.714720   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:57.770550   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:57.770583   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:57.785062   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:57.785093   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:57.853448   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:57.853468   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:57.853480   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:56.167768   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.665108   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:56.014274   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.014535   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.816961   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:00.817450   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:57.939957   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:57.939999   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:00.493469   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:00.509746   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:00.509819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:00.546582   80762 cri.go:89] found id: ""
	I0612 21:41:00.546610   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.546620   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:00.546629   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:00.546683   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:00.584229   80762 cri.go:89] found id: ""
	I0612 21:41:00.584256   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.584264   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:00.584269   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:00.584337   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:00.618679   80762 cri.go:89] found id: ""
	I0612 21:41:00.618704   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.618712   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:00.618719   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:00.618778   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:00.656336   80762 cri.go:89] found id: ""
	I0612 21:41:00.656364   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.656375   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:00.656384   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:00.656457   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:00.694147   80762 cri.go:89] found id: ""
	I0612 21:41:00.694173   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.694182   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:00.694187   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:00.694236   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:00.733964   80762 cri.go:89] found id: ""
	I0612 21:41:00.733994   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.734006   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:00.734014   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:00.734076   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:00.771245   80762 cri.go:89] found id: ""
	I0612 21:41:00.771274   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.771287   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:00.771293   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:00.771357   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:00.809118   80762 cri.go:89] found id: ""
	I0612 21:41:00.809150   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.809158   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:00.809168   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:00.809188   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:00.863479   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:00.863514   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:00.878749   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:00.878783   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:00.955800   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:00.955825   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:00.955844   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:01.037587   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:01.037618   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:00.666373   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.165203   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:00.513805   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.017922   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.317115   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:05.817907   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.583063   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:03.597656   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:03.597732   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:03.633233   80762 cri.go:89] found id: ""
	I0612 21:41:03.633263   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.633283   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:03.633291   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:03.633357   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:03.679900   80762 cri.go:89] found id: ""
	I0612 21:41:03.679930   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.679941   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:03.679948   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:03.680018   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:03.718766   80762 cri.go:89] found id: ""
	I0612 21:41:03.718792   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.718800   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:03.718811   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:03.718868   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:03.759404   80762 cri.go:89] found id: ""
	I0612 21:41:03.759429   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.759437   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:03.759443   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:03.759496   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:03.794313   80762 cri.go:89] found id: ""
	I0612 21:41:03.794348   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.794357   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:03.794364   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:03.794430   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:03.832525   80762 cri.go:89] found id: ""
	I0612 21:41:03.832546   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.832554   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:03.832559   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:03.832607   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:03.872841   80762 cri.go:89] found id: ""
	I0612 21:41:03.872868   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.872878   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:03.872885   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:03.872945   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:03.912736   80762 cri.go:89] found id: ""
	I0612 21:41:03.912760   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.912770   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:03.912781   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:03.912794   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:03.986645   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:03.986672   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:03.986688   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:04.066766   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:04.066799   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:04.108219   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:04.108250   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:04.168866   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:04.168911   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:06.684232   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:06.698359   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:06.698443   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:06.735324   80762 cri.go:89] found id: ""
	I0612 21:41:06.735350   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.735359   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:06.735364   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:06.735418   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:06.771763   80762 cri.go:89] found id: ""
	I0612 21:41:06.771786   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.771794   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:06.771799   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:06.771850   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:06.808151   80762 cri.go:89] found id: ""
	I0612 21:41:06.808179   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.808188   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:06.808193   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:06.808263   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:06.846099   80762 cri.go:89] found id: ""
	I0612 21:41:06.846121   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.846129   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:06.846134   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:06.846182   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:06.883559   80762 cri.go:89] found id: ""
	I0612 21:41:06.883584   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.883591   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:06.883597   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:06.883645   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:06.920799   80762 cri.go:89] found id: ""
	I0612 21:41:06.920830   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.920841   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:06.920849   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:06.920914   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:06.964441   80762 cri.go:89] found id: ""
	I0612 21:41:06.964472   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.964482   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:06.964490   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:06.964561   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:07.000866   80762 cri.go:89] found id: ""
	I0612 21:41:07.000901   80762 logs.go:276] 0 containers: []
	W0612 21:41:07.000912   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:07.000924   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:07.000993   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:07.017074   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:07.017121   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:07.093873   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:07.093901   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:07.093925   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:07.171258   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:07.171293   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:07.212588   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:07.212624   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:05.166177   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:07.665354   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.665558   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:05.512109   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:07.512615   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.513483   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:08.316327   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:10.316456   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.767332   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:09.781184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:09.781249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:09.818966   80762 cri.go:89] found id: ""
	I0612 21:41:09.818999   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.819008   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:09.819014   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:09.819064   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:09.854714   80762 cri.go:89] found id: ""
	I0612 21:41:09.854742   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.854760   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:09.854772   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:09.854823   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:09.891229   80762 cri.go:89] found id: ""
	I0612 21:41:09.891257   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.891268   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:09.891274   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:09.891335   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:09.928569   80762 cri.go:89] found id: ""
	I0612 21:41:09.928598   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.928610   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:09.928617   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:09.928680   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:09.963681   80762 cri.go:89] found id: ""
	I0612 21:41:09.963714   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.963725   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:09.963733   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:09.963819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:10.002340   80762 cri.go:89] found id: ""
	I0612 21:41:10.002368   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.002380   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:10.002388   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:10.002454   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:10.041935   80762 cri.go:89] found id: ""
	I0612 21:41:10.041961   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.041972   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:10.041979   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:10.042047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:10.080541   80762 cri.go:89] found id: ""
	I0612 21:41:10.080571   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.080584   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:10.080598   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:10.080614   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:10.140904   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:10.140944   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:10.176646   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:10.176688   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:10.272147   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:10.272169   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:10.272183   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:10.352649   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:10.352686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:12.166618   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.665896   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.013177   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.013716   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.317177   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.317391   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:16.815940   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.896274   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:12.911147   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:12.911231   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:12.947628   80762 cri.go:89] found id: ""
	I0612 21:41:12.947651   80762 logs.go:276] 0 containers: []
	W0612 21:41:12.947660   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:12.947665   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:12.947726   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:12.982813   80762 cri.go:89] found id: ""
	I0612 21:41:12.982837   80762 logs.go:276] 0 containers: []
	W0612 21:41:12.982845   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:12.982851   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:12.982898   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:13.021360   80762 cri.go:89] found id: ""
	I0612 21:41:13.021403   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.021412   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:13.021417   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:13.021468   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:13.063534   80762 cri.go:89] found id: ""
	I0612 21:41:13.063566   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.063576   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:13.063585   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:13.063666   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:13.098767   80762 cri.go:89] found id: ""
	I0612 21:41:13.098796   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.098807   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:13.098816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:13.098878   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:13.140764   80762 cri.go:89] found id: ""
	I0612 21:41:13.140797   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.140809   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:13.140816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:13.140882   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:13.180356   80762 cri.go:89] found id: ""
	I0612 21:41:13.180400   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.180413   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:13.180420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:13.180482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:13.215198   80762 cri.go:89] found id: ""
	I0612 21:41:13.215227   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.215238   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:13.215249   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:13.215265   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:13.273143   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:13.273182   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:13.287861   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:13.287893   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:13.366052   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:13.366073   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:13.366099   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:13.450980   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:13.451015   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:15.991386   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:16.005618   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:16.005699   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:16.047253   80762 cri.go:89] found id: ""
	I0612 21:41:16.047281   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.047289   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:16.047295   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:16.047356   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:16.082860   80762 cri.go:89] found id: ""
	I0612 21:41:16.082886   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.082894   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:16.082899   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:16.082948   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:16.123127   80762 cri.go:89] found id: ""
	I0612 21:41:16.123152   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.123164   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:16.123187   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:16.123247   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:16.167155   80762 cri.go:89] found id: ""
	I0612 21:41:16.167189   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.167199   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:16.167207   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:16.167276   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:16.204036   80762 cri.go:89] found id: ""
	I0612 21:41:16.204061   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.204071   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:16.204079   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:16.204140   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:16.246672   80762 cri.go:89] found id: ""
	I0612 21:41:16.246701   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.246712   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:16.246721   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:16.246785   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:16.286820   80762 cri.go:89] found id: ""
	I0612 21:41:16.286849   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.286857   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:16.286864   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:16.286919   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:16.326622   80762 cri.go:89] found id: ""
	I0612 21:41:16.326649   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.326660   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:16.326667   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:16.326678   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:16.407492   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:16.407525   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:16.448207   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:16.448236   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:16.501675   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:16.501714   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:16.518173   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:16.518202   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:16.592582   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:17.166163   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.167204   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:16.514405   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.016197   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:18.816596   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:20.817504   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.093054   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:19.107926   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:19.108002   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:19.149386   80762 cri.go:89] found id: ""
	I0612 21:41:19.149411   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.149421   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:19.149429   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:19.149493   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:19.188092   80762 cri.go:89] found id: ""
	I0612 21:41:19.188120   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.188131   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:19.188139   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:19.188201   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:19.227203   80762 cri.go:89] found id: ""
	I0612 21:41:19.227229   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.227235   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:19.227242   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:19.227301   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:19.269187   80762 cri.go:89] found id: ""
	I0612 21:41:19.269217   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.269225   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:19.269232   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:19.269294   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:19.305394   80762 cri.go:89] found id: ""
	I0612 21:41:19.305418   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.305425   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:19.305431   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:19.305480   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:19.347794   80762 cri.go:89] found id: ""
	I0612 21:41:19.347825   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.347837   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:19.347846   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:19.347907   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:19.384314   80762 cri.go:89] found id: ""
	I0612 21:41:19.384346   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.384364   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:19.384372   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:19.384428   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:19.421782   80762 cri.go:89] found id: ""
	I0612 21:41:19.421811   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.421822   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:19.421834   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:19.421849   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:19.475969   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:19.476000   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:19.490683   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:19.490710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:19.574492   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:19.574513   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:19.574525   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:19.662288   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:19.662324   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:22.205404   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:22.220217   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:22.220297   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:22.256870   80762 cri.go:89] found id: ""
	I0612 21:41:22.256901   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.256913   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:22.256921   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:22.256984   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:22.290380   80762 cri.go:89] found id: ""
	I0612 21:41:22.290413   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.290425   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:22.290433   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:22.290497   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:22.324981   80762 cri.go:89] found id: ""
	I0612 21:41:22.325010   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.325019   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:22.325024   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:22.325093   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:22.362900   80762 cri.go:89] found id: ""
	I0612 21:41:22.362926   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.362938   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:22.362946   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:22.363008   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:22.399004   80762 cri.go:89] found id: ""
	I0612 21:41:22.399037   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.399048   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:22.399056   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:22.399116   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:22.434306   80762 cri.go:89] found id: ""
	I0612 21:41:22.434341   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.434355   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:22.434365   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:22.434422   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:22.479085   80762 cri.go:89] found id: ""
	I0612 21:41:22.479116   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.479129   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:22.479142   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:22.479228   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:22.516730   80762 cri.go:89] found id: ""
	I0612 21:41:22.516755   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.516761   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:22.516769   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:22.516780   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:22.570921   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:22.570957   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:22.585409   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:22.585437   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:22.667314   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:22.667342   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:22.667360   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:22.743865   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:22.743901   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:21.170060   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.666364   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:21.021658   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.512541   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.316232   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.816641   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.282306   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:25.297334   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:25.297407   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:25.336610   80762 cri.go:89] found id: ""
	I0612 21:41:25.336641   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.336654   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:25.336662   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:25.336729   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:25.373307   80762 cri.go:89] found id: ""
	I0612 21:41:25.373338   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.373350   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:25.373358   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:25.373425   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:25.413141   80762 cri.go:89] found id: ""
	I0612 21:41:25.413169   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.413177   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:25.413183   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:25.413233   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:25.450810   80762 cri.go:89] found id: ""
	I0612 21:41:25.450842   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.450853   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:25.450862   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:25.450924   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:25.487017   80762 cri.go:89] found id: ""
	I0612 21:41:25.487049   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.487255   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:25.487269   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:25.487328   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:25.524335   80762 cri.go:89] found id: ""
	I0612 21:41:25.524361   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.524371   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:25.524377   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:25.524428   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:25.560394   80762 cri.go:89] found id: ""
	I0612 21:41:25.560421   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.560429   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:25.560435   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:25.560482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:25.599334   80762 cri.go:89] found id: ""
	I0612 21:41:25.599362   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.599372   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:25.599384   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:25.599399   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:25.680153   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:25.680195   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:25.726336   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:25.726377   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:25.777064   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:25.777098   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:25.791978   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:25.792007   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:25.868860   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:25.667028   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.164920   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.514249   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.012042   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:30.013658   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.316543   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:30.816789   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.369099   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:28.382729   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:28.382786   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:28.423835   80762 cri.go:89] found id: ""
	I0612 21:41:28.423865   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.423875   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:28.423889   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:28.423953   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:28.463098   80762 cri.go:89] found id: ""
	I0612 21:41:28.463127   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.463137   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:28.463144   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:28.463223   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:28.499678   80762 cri.go:89] found id: ""
	I0612 21:41:28.499707   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.499718   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:28.499726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:28.499786   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:28.536057   80762 cri.go:89] found id: ""
	I0612 21:41:28.536089   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.536101   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:28.536108   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:28.536180   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:28.571052   80762 cri.go:89] found id: ""
	I0612 21:41:28.571080   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.571090   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:28.571098   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:28.571162   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:28.607320   80762 cri.go:89] found id: ""
	I0612 21:41:28.607348   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.607360   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:28.607368   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:28.607427   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:28.643000   80762 cri.go:89] found id: ""
	I0612 21:41:28.643029   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.643037   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:28.643042   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:28.643113   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:28.684134   80762 cri.go:89] found id: ""
	I0612 21:41:28.684164   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.684175   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:28.684186   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:28.684201   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:28.737059   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:28.737092   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:28.753290   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:28.753320   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:28.826964   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:28.826990   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:28.827009   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:28.908874   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:28.908919   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:31.450362   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:31.465831   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:31.465912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:31.507441   80762 cri.go:89] found id: ""
	I0612 21:41:31.507465   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.507474   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:31.507482   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:31.507546   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:31.541403   80762 cri.go:89] found id: ""
	I0612 21:41:31.541437   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.541450   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:31.541458   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:31.541524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:31.576367   80762 cri.go:89] found id: ""
	I0612 21:41:31.576393   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.576405   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:31.576412   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:31.576475   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:31.615053   80762 cri.go:89] found id: ""
	I0612 21:41:31.615081   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.615091   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:31.615099   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:31.615159   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:31.650460   80762 cri.go:89] found id: ""
	I0612 21:41:31.650495   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.650504   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:31.650511   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:31.650580   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:31.690764   80762 cri.go:89] found id: ""
	I0612 21:41:31.690792   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.690803   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:31.690810   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:31.690870   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:31.729785   80762 cri.go:89] found id: ""
	I0612 21:41:31.729809   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.729817   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:31.729822   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:31.729881   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:31.772978   80762 cri.go:89] found id: ""
	I0612 21:41:31.773005   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.773013   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:31.773023   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:31.773038   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:31.830451   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:31.830484   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:31.846821   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:31.846850   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:31.927289   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:31.927328   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:31.927358   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:32.004814   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:32.004852   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:30.165423   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:32.165695   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.664959   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:32.512866   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.515104   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:33.316674   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:35.816686   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.550931   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:34.567559   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:34.567618   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:34.602234   80762 cri.go:89] found id: ""
	I0612 21:41:34.602260   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.602267   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:34.602273   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:34.602318   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:34.639575   80762 cri.go:89] found id: ""
	I0612 21:41:34.639598   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.639605   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:34.639610   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:34.639656   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:34.681325   80762 cri.go:89] found id: ""
	I0612 21:41:34.681360   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.681368   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:34.681374   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:34.681478   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:34.721405   80762 cri.go:89] found id: ""
	I0612 21:41:34.721432   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.721444   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:34.721451   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:34.721517   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:34.764344   80762 cri.go:89] found id: ""
	I0612 21:41:34.764375   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.764386   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:34.764394   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:34.764459   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:34.802083   80762 cri.go:89] found id: ""
	I0612 21:41:34.802107   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.802115   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:34.802121   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:34.802181   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:34.843418   80762 cri.go:89] found id: ""
	I0612 21:41:34.843441   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.843450   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:34.843455   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:34.843501   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:34.877803   80762 cri.go:89] found id: ""
	I0612 21:41:34.877832   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.877842   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:34.877852   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:34.877867   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:34.930515   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:34.930545   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:34.943705   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:34.943729   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:35.024912   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:35.024931   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:35.024941   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:35.109129   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:35.109165   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:37.651788   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:37.667901   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:37.667964   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:37.709599   80762 cri.go:89] found id: ""
	I0612 21:41:37.709627   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.709637   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:37.709645   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:37.709700   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:37.747150   80762 cri.go:89] found id: ""
	I0612 21:41:37.747191   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.747204   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:37.747212   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:37.747273   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:37.785528   80762 cri.go:89] found id: ""
	I0612 21:41:37.785552   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.785560   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:37.785567   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:37.785614   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:37.822363   80762 cri.go:89] found id: ""
	I0612 21:41:37.822390   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.822400   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:37.822408   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:37.822468   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:36.666054   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:39.165222   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:37.012397   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:39.012503   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:38.317132   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:40.821114   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:37.858285   80762 cri.go:89] found id: ""
	I0612 21:41:37.858395   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.858409   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:37.858416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:37.858466   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:37.897500   80762 cri.go:89] found id: ""
	I0612 21:41:37.897542   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.897556   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:37.897574   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:37.897635   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:37.937878   80762 cri.go:89] found id: ""
	I0612 21:41:37.937905   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.937921   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:37.937927   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:37.937985   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:37.978282   80762 cri.go:89] found id: ""
	I0612 21:41:37.978310   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.978319   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:37.978327   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:37.978341   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:38.055864   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:38.055890   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:38.055903   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:38.135883   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:38.135918   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:38.178641   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:38.178668   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:38.236635   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:38.236686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:40.759426   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:40.773526   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:40.773598   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:40.819130   80762 cri.go:89] found id: ""
	I0612 21:41:40.819161   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.819190   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:40.819202   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:40.819264   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:40.856176   80762 cri.go:89] found id: ""
	I0612 21:41:40.856204   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.856216   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:40.856224   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:40.856287   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:40.896709   80762 cri.go:89] found id: ""
	I0612 21:41:40.896739   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.896750   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:40.896759   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:40.896820   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:40.936431   80762 cri.go:89] found id: ""
	I0612 21:41:40.936457   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.936465   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:40.936471   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:40.936528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:40.979773   80762 cri.go:89] found id: ""
	I0612 21:41:40.979809   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.979820   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:40.979828   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:40.979892   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:41.023885   80762 cri.go:89] found id: ""
	I0612 21:41:41.023910   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.023919   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:41.023925   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:41.024004   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:41.070370   80762 cri.go:89] found id: ""
	I0612 21:41:41.070396   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.070405   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:41.070411   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:41.070467   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:41.115282   80762 cri.go:89] found id: ""
	I0612 21:41:41.115311   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.115321   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:41.115332   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:41.115346   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:41.128680   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:41.128710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:41.206100   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:41.206125   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:41.206140   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:41.283499   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:41.283536   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:41.323275   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:41.323307   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:41.166258   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.666600   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:41.013379   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.512866   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.316659   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:45.816066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.875750   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:43.890156   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:43.890216   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:43.935105   80762 cri.go:89] found id: ""
	I0612 21:41:43.935135   80762 logs.go:276] 0 containers: []
	W0612 21:41:43.935147   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:43.935155   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:43.935236   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:43.980929   80762 cri.go:89] found id: ""
	I0612 21:41:43.980958   80762 logs.go:276] 0 containers: []
	W0612 21:41:43.980967   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:43.980973   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:43.981051   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:44.029387   80762 cri.go:89] found id: ""
	I0612 21:41:44.029409   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.029416   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:44.029421   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:44.029483   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:44.067415   80762 cri.go:89] found id: ""
	I0612 21:41:44.067449   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.067460   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:44.067468   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:44.067528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:44.105093   80762 cri.go:89] found id: ""
	I0612 21:41:44.105117   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.105125   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:44.105131   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:44.105178   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:44.142647   80762 cri.go:89] found id: ""
	I0612 21:41:44.142680   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.142691   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:44.142699   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:44.142759   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:44.182725   80762 cri.go:89] found id: ""
	I0612 21:41:44.182756   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.182767   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:44.182775   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:44.182836   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:44.219538   80762 cri.go:89] found id: ""
	I0612 21:41:44.219568   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.219580   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:44.219593   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:44.219608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:44.272234   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:44.272267   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:44.285631   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:44.285663   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:44.362453   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:44.362470   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:44.362482   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:44.444624   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:44.444656   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:46.985731   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:46.999749   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:46.999819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:47.035051   80762 cri.go:89] found id: ""
	I0612 21:41:47.035073   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.035080   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:47.035086   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:47.035136   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:47.077929   80762 cri.go:89] found id: ""
	I0612 21:41:47.077964   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.077975   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:47.077982   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:47.078062   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:47.111621   80762 cri.go:89] found id: ""
	I0612 21:41:47.111660   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.111671   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:47.111679   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:47.111744   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:47.150700   80762 cri.go:89] found id: ""
	I0612 21:41:47.150725   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.150733   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:47.150739   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:47.150787   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:47.189547   80762 cri.go:89] found id: ""
	I0612 21:41:47.189576   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.189586   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:47.189593   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:47.189660   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:47.229482   80762 cri.go:89] found id: ""
	I0612 21:41:47.229510   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.229522   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:47.229530   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:47.229599   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:47.266798   80762 cri.go:89] found id: ""
	I0612 21:41:47.266826   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.266837   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:47.266844   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:47.266906   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:47.302256   80762 cri.go:89] found id: ""
	I0612 21:41:47.302280   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.302287   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:47.302295   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:47.302306   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:47.354485   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:47.354526   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:47.368689   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:47.368713   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:47.438219   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:47.438244   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:47.438257   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:47.514199   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:47.514227   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:46.165541   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:48.664957   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:45.512922   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:47.513491   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.012630   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:47.817136   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.317083   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.056394   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:50.069348   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:50.069482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:50.106057   80762 cri.go:89] found id: ""
	I0612 21:41:50.106087   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.106097   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:50.106104   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:50.106162   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:50.144532   80762 cri.go:89] found id: ""
	I0612 21:41:50.144560   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.144571   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:50.144579   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:50.144662   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:50.184549   80762 cri.go:89] found id: ""
	I0612 21:41:50.184575   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.184583   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:50.184588   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:50.184648   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:50.228520   80762 cri.go:89] found id: ""
	I0612 21:41:50.228555   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.228569   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:50.228578   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:50.228649   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:50.265697   80762 cri.go:89] found id: ""
	I0612 21:41:50.265726   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.265737   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:50.265744   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:50.265818   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:50.301353   80762 cri.go:89] found id: ""
	I0612 21:41:50.301393   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.301410   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:50.301416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:50.301477   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:50.337273   80762 cri.go:89] found id: ""
	I0612 21:41:50.337298   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.337306   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:50.337313   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:50.337374   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:50.383090   80762 cri.go:89] found id: ""
	I0612 21:41:50.383116   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.383126   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:50.383138   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:50.383151   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:50.454193   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:50.454240   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:50.477753   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:50.477779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:50.544052   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:50.544075   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:50.544091   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:50.626441   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:50.626480   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:50.666068   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.666287   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.013142   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:54.512869   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.318942   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:54.816918   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:56.818011   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:53.168599   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:53.181682   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:53.181764   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:53.228060   80762 cri.go:89] found id: ""
	I0612 21:41:53.228096   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.228107   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:53.228115   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:53.228176   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:53.264867   80762 cri.go:89] found id: ""
	I0612 21:41:53.264890   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.264898   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:53.264903   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:53.264950   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:53.298351   80762 cri.go:89] found id: ""
	I0612 21:41:53.298378   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.298386   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:53.298392   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:53.298448   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:53.335888   80762 cri.go:89] found id: ""
	I0612 21:41:53.335910   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.335917   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:53.335922   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:53.335980   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:53.376131   80762 cri.go:89] found id: ""
	I0612 21:41:53.376166   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.376175   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:53.376183   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:53.376240   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:53.412059   80762 cri.go:89] found id: ""
	I0612 21:41:53.412082   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.412088   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:53.412097   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:53.412142   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:53.446776   80762 cri.go:89] found id: ""
	I0612 21:41:53.446805   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.446816   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:53.446823   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:53.446894   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:53.482411   80762 cri.go:89] found id: ""
	I0612 21:41:53.482433   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.482441   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:53.482449   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:53.482460   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:53.522419   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:53.522448   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:53.573107   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:53.573141   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:53.587101   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:53.587147   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:53.665631   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:53.665660   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:53.665675   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:56.242482   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:56.255606   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:56.255682   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:56.290837   80762 cri.go:89] found id: ""
	I0612 21:41:56.290861   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.290872   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:56.290880   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:56.290938   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:56.325429   80762 cri.go:89] found id: ""
	I0612 21:41:56.325458   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.325466   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:56.325471   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:56.325534   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:56.359809   80762 cri.go:89] found id: ""
	I0612 21:41:56.359835   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.359845   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:56.359852   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:56.359912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:56.397775   80762 cri.go:89] found id: ""
	I0612 21:41:56.397803   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.397815   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:56.397823   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:56.397884   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:56.433917   80762 cri.go:89] found id: ""
	I0612 21:41:56.433945   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.433956   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:56.433963   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:56.434028   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:56.467390   80762 cri.go:89] found id: ""
	I0612 21:41:56.467419   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.467429   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:56.467438   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:56.467496   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:56.504014   80762 cri.go:89] found id: ""
	I0612 21:41:56.504048   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.504059   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:56.504067   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:56.504138   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:56.544157   80762 cri.go:89] found id: ""
	I0612 21:41:56.544187   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.544198   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:56.544209   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:56.544224   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:56.595431   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:56.595462   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:56.608897   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:56.608936   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:56.682706   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:56.682735   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:56.682749   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:56.762598   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:56.762634   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:55.166152   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:57.167363   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.666265   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:56.514832   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:58.515091   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.317285   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:01.818345   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.302898   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:59.317901   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:59.317976   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:59.360136   80762 cri.go:89] found id: ""
	I0612 21:41:59.360164   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.360174   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:59.360181   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:59.360249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:59.397205   80762 cri.go:89] found id: ""
	I0612 21:41:59.397233   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.397244   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:59.397252   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:59.397312   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:59.437063   80762 cri.go:89] found id: ""
	I0612 21:41:59.437093   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.437105   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:59.437113   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:59.437172   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:59.472800   80762 cri.go:89] found id: ""
	I0612 21:41:59.472827   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.472835   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:59.472843   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:59.472904   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:59.509452   80762 cri.go:89] found id: ""
	I0612 21:41:59.509474   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.509482   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:59.509487   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:59.509534   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:59.546121   80762 cri.go:89] found id: ""
	I0612 21:41:59.546151   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.546162   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:59.546170   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:59.546231   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:59.582983   80762 cri.go:89] found id: ""
	I0612 21:41:59.583007   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.583014   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:59.583020   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:59.583065   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:59.621110   80762 cri.go:89] found id: ""
	I0612 21:41:59.621148   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.621160   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:59.621171   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:59.621187   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:59.673113   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:59.673143   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:59.688106   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:59.688171   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:59.767653   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:59.767678   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:59.767692   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:59.848467   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:59.848507   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
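The cycle above shows the probe minikube repeats for each control-plane component: run `sudo crictl ps -a --quiet --name=<component>` over SSH and treat empty output as "No container was found matching". A minimal local sketch of that probe follows (a hypothetical helper, not minikube's actual cri.go code; it assumes crictl is installed and runnable via sudo on the machine it runs on):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs mimics the probe in the log: ask crictl for all
    // containers (running or exited) whose name matches the component
    // and return their IDs from the --quiet output.
    func listContainerIDs(component string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Printf("probe %q failed: %v\n", c, err)
    			continue
    		}
    		if len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%s: %v\n", c, ids)
    	}
    }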
	I0612 21:42:02.391324   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:02.406543   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:02.406621   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:02.442225   80762 cri.go:89] found id: ""
	I0612 21:42:02.442255   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.442265   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:02.442273   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:02.442341   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:02.479445   80762 cri.go:89] found id: ""
	I0612 21:42:02.479476   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.479487   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:02.479495   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:02.479557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:02.517654   80762 cri.go:89] found id: ""
	I0612 21:42:02.517685   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.517697   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:02.517705   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:02.517775   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:02.562743   80762 cri.go:89] found id: ""
	I0612 21:42:02.562777   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.562788   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:02.562807   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:02.562873   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:02.597775   80762 cri.go:89] found id: ""
	I0612 21:42:02.597805   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.597816   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:02.597824   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:02.597886   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:02.633869   80762 cri.go:89] found id: ""
	I0612 21:42:02.633901   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.633913   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:02.633921   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:02.633979   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:02.671931   80762 cri.go:89] found id: ""
	I0612 21:42:02.671962   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.671974   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:02.671982   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:02.672044   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:02.709162   80762 cri.go:89] found id: ""
	I0612 21:42:02.709192   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.709204   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:02.709214   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:02.709228   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:02.722937   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:02.722967   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:02.798249   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:02.798275   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:02.798292   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:02.165664   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:04.166215   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:01.012458   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:03.513414   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:04.317221   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:06.818062   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:02.876339   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:02.876376   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:02.913080   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:02.913106   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:05.464433   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:05.478249   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:05.478326   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:05.520742   80762 cri.go:89] found id: ""
	I0612 21:42:05.520765   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.520772   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:05.520778   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:05.520840   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:05.564864   80762 cri.go:89] found id: ""
	I0612 21:42:05.564896   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.564904   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:05.564911   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:05.564956   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:05.602917   80762 cri.go:89] found id: ""
	I0612 21:42:05.602942   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.602950   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:05.602956   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:05.603040   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:05.645073   80762 cri.go:89] found id: ""
	I0612 21:42:05.645104   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.645112   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:05.645119   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:05.645166   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:05.684133   80762 cri.go:89] found id: ""
	I0612 21:42:05.684165   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.684176   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:05.684184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:05.684249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:05.721461   80762 cri.go:89] found id: ""
	I0612 21:42:05.721489   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.721499   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:05.721506   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:05.721573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:05.756710   80762 cri.go:89] found id: ""
	I0612 21:42:05.756744   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.756755   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:05.756763   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:05.756814   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:05.792182   80762 cri.go:89] found id: ""
	I0612 21:42:05.792210   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.792220   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:05.792230   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:05.792245   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:05.836597   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:05.836632   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:05.888704   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:05.888742   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:05.903354   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:05.903387   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:05.976146   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:05.976169   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:05.976183   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
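Every "describe nodes" attempt in this run ends the same way: the node's kubeconfig points at localhost:8443, and since the kube-apiserver listing above keeps coming back empty, nothing is listening there and kubectl can only report "connection refused". A quick reachability sketch of that same endpoint (an illustration, not part of the test harness):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // Dial the apiserver endpoint the failing "describe nodes" step uses.
    // "connection refused" here matches the empty kube-apiserver listing
    // earlier in the log: no process is bound to 8443 yet.
    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }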
	I0612 21:42:06.664789   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.666830   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:06.013885   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.512997   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:09.316398   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:11.317014   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.559612   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:08.573592   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:08.573648   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:08.613347   80762 cri.go:89] found id: ""
	I0612 21:42:08.613373   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.613381   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:08.613387   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:08.613449   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:08.650606   80762 cri.go:89] found id: ""
	I0612 21:42:08.650634   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.650643   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:08.650648   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:08.650692   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:08.687056   80762 cri.go:89] found id: ""
	I0612 21:42:08.687087   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.687097   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:08.687105   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:08.687191   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:08.723112   80762 cri.go:89] found id: ""
	I0612 21:42:08.723138   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.723146   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:08.723161   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:08.723238   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:08.764772   80762 cri.go:89] found id: ""
	I0612 21:42:08.764801   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.764812   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:08.764820   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:08.764888   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:08.801914   80762 cri.go:89] found id: ""
	I0612 21:42:08.801944   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.801954   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:08.801962   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:08.802025   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:08.837991   80762 cri.go:89] found id: ""
	I0612 21:42:08.838017   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.838025   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:08.838030   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:08.838084   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:08.874977   80762 cri.go:89] found id: ""
	I0612 21:42:08.875016   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.875027   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:08.875039   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:08.875058   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:08.931628   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:08.931659   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:08.946763   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:08.946791   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:09.028039   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:09.028061   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:09.028079   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:09.104350   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:09.104406   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:11.645114   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:11.659382   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:11.659455   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:11.702205   80762 cri.go:89] found id: ""
	I0612 21:42:11.702236   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.702246   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:11.702254   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:11.702309   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:11.748328   80762 cri.go:89] found id: ""
	I0612 21:42:11.748350   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.748357   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:11.748362   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:11.748408   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:11.788980   80762 cri.go:89] found id: ""
	I0612 21:42:11.789009   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.789020   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:11.789027   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:11.789083   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:11.829886   80762 cri.go:89] found id: ""
	I0612 21:42:11.829910   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.829920   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:11.829928   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:11.830006   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:11.870088   80762 cri.go:89] found id: ""
	I0612 21:42:11.870120   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.870131   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:11.870138   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:11.870201   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:11.907862   80762 cri.go:89] found id: ""
	I0612 21:42:11.907893   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.907905   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:11.907913   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:11.907973   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:11.947773   80762 cri.go:89] found id: ""
	I0612 21:42:11.947798   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.947808   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:11.947816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:11.947876   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:11.987806   80762 cri.go:89] found id: ""
	I0612 21:42:11.987837   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.987848   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:11.987859   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:11.987878   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:12.043451   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:12.043481   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:12.057946   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:12.057980   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:12.134265   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:12.134298   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:12.134310   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:12.221276   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:12.221315   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:11.165305   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.165846   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:11.012728   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.512292   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.512327   80243 pod_ready.go:81] duration metric: took 4m0.006424182s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	E0612 21:42:13.512336   80243 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0612 21:42:13.512343   80243 pod_ready.go:38] duration metric: took 4m5.595554955s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
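The pod_ready lines above are a bounded poll: the metrics-server pod is re-checked every few seconds until it reports Ready or the 4m0s budget runs out, at which point the wait returns "context deadline exceeded" and the test moves on. A simplified sketch of that loop shape, with a placeholder readiness check and a shortened timeout for illustration (the real wait uses the apiserver's pod conditions and a 4-minute deadline):

    package main

    import (
    	"context"
    	"errors"
    	"fmt"
    	"time"
    )

    // waitFor polls check() until it reports true or the context deadline
    // passes, mirroring the pod_ready loop that gave up above.
    func waitFor(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
    	t := time.NewTicker(interval)
    	defer t.Stop()
    	for {
    		ok, err := check()
    		if err != nil {
    			return err
    		}
    		if ok {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // context.DeadlineExceeded, as in the log
    		case <-t.C:
    		}
    	}
    }

    func main() {
    	// 5s here just for demonstration; the run above allowed 4m0s.
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	err := waitFor(ctx, time.Second, func() (bool, error) {
    		// Placeholder: the real check asks whether the metrics-server
    		// pod has condition Ready=True.
    		return false, nil
    	})
    	if errors.Is(err, context.DeadlineExceeded) {
    		fmt.Println("waitPodCondition: context deadline exceeded")
    	}
    }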
	I0612 21:42:13.512359   80243 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:42:13.512383   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:13.512428   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:13.571855   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:13.571882   80243 cri.go:89] found id: ""
	I0612 21:42:13.571892   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:13.571942   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.576505   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:13.576557   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:13.614768   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:13.614792   80243 cri.go:89] found id: ""
	I0612 21:42:13.614799   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:13.614847   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.619276   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:13.619342   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:13.662832   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:13.662856   80243 cri.go:89] found id: ""
	I0612 21:42:13.662866   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:13.662931   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.667956   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:13.668031   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:13.710456   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:13.710479   80243 cri.go:89] found id: ""
	I0612 21:42:13.710487   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:13.710540   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.715411   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:13.715480   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:13.759924   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:13.759952   80243 cri.go:89] found id: ""
	I0612 21:42:13.759965   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:13.760027   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.764854   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:13.764919   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:13.804802   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:13.804826   80243 cri.go:89] found id: ""
	I0612 21:42:13.804834   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:13.804891   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.809421   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:13.809489   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:13.846580   80243 cri.go:89] found id: ""
	I0612 21:42:13.846615   80243 logs.go:276] 0 containers: []
	W0612 21:42:13.846625   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:13.846633   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:13.846685   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:13.893480   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:13.893504   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:13.893510   80243 cri.go:89] found id: ""
	I0612 21:42:13.893523   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:13.893571   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.898530   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.905072   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:13.905100   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:13.942165   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:13.942195   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:13.981852   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:13.981882   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:14.018431   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:14.018457   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:14.067616   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:14.067645   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:14.082853   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:14.082886   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:14.220156   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:14.220188   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:14.274395   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:14.274430   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:14.319087   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:14.319116   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:14.834792   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:14.834831   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:14.893213   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:14.893252   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:14.957423   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:14.957466   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:15.013756   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:15.013803   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:13.318558   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:15.318904   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:14.760949   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:14.775242   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:14.775356   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:14.818818   80762 cri.go:89] found id: ""
	I0612 21:42:14.818847   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.818856   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:14.818863   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:14.818931   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:14.859106   80762 cri.go:89] found id: ""
	I0612 21:42:14.859146   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.859157   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:14.859164   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:14.859247   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:14.894993   80762 cri.go:89] found id: ""
	I0612 21:42:14.895016   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.895026   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:14.895037   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:14.895087   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:14.943534   80762 cri.go:89] found id: ""
	I0612 21:42:14.943561   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.943572   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:14.943579   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:14.943645   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:14.985243   80762 cri.go:89] found id: ""
	I0612 21:42:14.985267   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.985274   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:14.985280   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:14.985328   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:15.029253   80762 cri.go:89] found id: ""
	I0612 21:42:15.029286   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.029297   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:15.029305   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:15.029371   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:15.063471   80762 cri.go:89] found id: ""
	I0612 21:42:15.063499   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.063510   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:15.063517   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:15.063580   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:15.101152   80762 cri.go:89] found id: ""
	I0612 21:42:15.101181   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.101201   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:15.101212   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:15.101227   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:15.178398   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:15.178416   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:15.178429   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:15.255420   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:15.255468   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:15.295492   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:15.295519   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:15.345010   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:15.345051   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:15.166546   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.666141   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:19.672626   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.561453   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:17.579672   80243 api_server.go:72] duration metric: took 4m17.376220984s to wait for apiserver process to appear ...
	I0612 21:42:17.579702   80243 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:42:17.579741   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:17.579787   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:17.620290   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:17.620318   80243 cri.go:89] found id: ""
	I0612 21:42:17.620325   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:17.620387   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.624598   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:17.624658   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:17.665957   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:17.665985   80243 cri.go:89] found id: ""
	I0612 21:42:17.665995   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:17.666056   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.671143   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:17.671222   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:17.717377   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:17.717396   80243 cri.go:89] found id: ""
	I0612 21:42:17.717404   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:17.717459   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.721710   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:17.721774   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:17.762712   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:17.762739   80243 cri.go:89] found id: ""
	I0612 21:42:17.762749   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:17.762807   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.767258   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:17.767329   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:17.803905   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:17.803956   80243 cri.go:89] found id: ""
	I0612 21:42:17.803969   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:17.804034   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.808260   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:17.808323   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:17.847402   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:17.847432   80243 cri.go:89] found id: ""
	I0612 21:42:17.847441   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:17.847502   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.851685   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:17.851757   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:17.897026   80243 cri.go:89] found id: ""
	I0612 21:42:17.897051   80243 logs.go:276] 0 containers: []
	W0612 21:42:17.897059   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:17.897065   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:17.897122   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:17.953764   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:17.953793   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:17.953799   80243 cri.go:89] found id: ""
	I0612 21:42:17.953808   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:17.953875   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.959822   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.965103   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:17.965127   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:18.089205   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:18.089229   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:18.153823   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:18.153876   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:18.198010   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:18.198037   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:18.255380   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:18.255410   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:18.271692   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:18.271725   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:18.318018   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:18.318049   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:18.379352   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:18.379386   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:18.437854   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:18.437884   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:18.487618   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:18.487650   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:18.934735   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:18.934784   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:18.983010   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:18.983050   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:19.043569   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:19.043605   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:17.819422   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:20.315423   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.862640   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:17.879256   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:17.879333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:17.918910   80762 cri.go:89] found id: ""
	I0612 21:42:17.918940   80762 logs.go:276] 0 containers: []
	W0612 21:42:17.918951   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:17.918958   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:17.919018   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:17.959701   80762 cri.go:89] found id: ""
	I0612 21:42:17.959726   80762 logs.go:276] 0 containers: []
	W0612 21:42:17.959734   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:17.959739   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:17.959820   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:18.005096   80762 cri.go:89] found id: ""
	I0612 21:42:18.005125   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.005142   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:18.005150   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:18.005211   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:18.046877   80762 cri.go:89] found id: ""
	I0612 21:42:18.046907   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.046919   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:18.046927   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:18.046992   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:18.087907   80762 cri.go:89] found id: ""
	I0612 21:42:18.087934   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.087946   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:18.087953   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:18.088016   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:18.139423   80762 cri.go:89] found id: ""
	I0612 21:42:18.139453   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.139464   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:18.139473   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:18.139535   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:18.180433   80762 cri.go:89] found id: ""
	I0612 21:42:18.180459   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.180469   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:18.180476   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:18.180537   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:18.220966   80762 cri.go:89] found id: ""
	I0612 21:42:18.220996   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.221005   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:18.221015   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:18.221033   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:18.276006   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:18.276031   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:18.290975   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:18.291026   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:18.369318   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:18.369345   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:18.369359   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:18.451336   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:18.451380   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:21.016353   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:21.030544   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:21.030611   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:21.072558   80762 cri.go:89] found id: ""
	I0612 21:42:21.072583   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.072591   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:21.072596   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:21.072649   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:21.106320   80762 cri.go:89] found id: ""
	I0612 21:42:21.106352   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.106364   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:21.106372   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:21.106431   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:21.139155   80762 cri.go:89] found id: ""
	I0612 21:42:21.139201   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.139212   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:21.139220   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:21.139285   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:21.178731   80762 cri.go:89] found id: ""
	I0612 21:42:21.178762   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.178772   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:21.178779   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:21.178838   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:21.213606   80762 cri.go:89] found id: ""
	I0612 21:42:21.213635   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.213645   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:21.213652   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:21.213707   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:21.250966   80762 cri.go:89] found id: ""
	I0612 21:42:21.250993   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.251009   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:21.251017   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:21.251084   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:21.289434   80762 cri.go:89] found id: ""
	I0612 21:42:21.289457   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.289465   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:21.289474   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:21.289520   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:21.329028   80762 cri.go:89] found id: ""
	I0612 21:42:21.329058   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.329069   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:21.329080   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:21.329098   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:21.342621   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:21.342648   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:21.418742   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:21.418766   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:21.418779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:21.493909   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:21.493944   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:21.534693   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:21.534723   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:22.165337   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.166122   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:21.581443   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:42:21.586756   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 200:
	ok
	I0612 21:42:21.587670   80243 api_server.go:141] control plane version: v1.30.1
	I0612 21:42:21.587691   80243 api_server.go:131] duration metric: took 4.007982669s to wait for apiserver health ...
	I0612 21:42:21.587699   80243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:42:21.587720   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:21.587761   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:21.627942   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:21.627965   80243 cri.go:89] found id: ""
	I0612 21:42:21.627974   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:21.628036   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.632308   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:21.632380   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:21.674453   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:21.674474   80243 cri.go:89] found id: ""
	I0612 21:42:21.674482   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:21.674539   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.679303   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:21.679376   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:21.717454   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:21.717483   80243 cri.go:89] found id: ""
	I0612 21:42:21.717492   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:21.717555   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.722113   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:21.722176   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:21.758752   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:21.758780   80243 cri.go:89] found id: ""
	I0612 21:42:21.758790   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:21.758847   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.763397   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:21.763465   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:21.802552   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:21.802574   80243 cri.go:89] found id: ""
	I0612 21:42:21.802583   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:21.802641   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.807570   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:21.807633   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:21.855093   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:21.855118   80243 cri.go:89] found id: ""
	I0612 21:42:21.855128   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:21.855212   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.860163   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:21.860231   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:21.907934   80243 cri.go:89] found id: ""
	I0612 21:42:21.907960   80243 logs.go:276] 0 containers: []
	W0612 21:42:21.907969   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:21.907977   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:21.908046   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:21.950085   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:21.950114   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:21.950120   80243 cri.go:89] found id: ""
	I0612 21:42:21.950128   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:21.950186   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.955633   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.960017   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:21.960038   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:22.015659   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:22.015708   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:22.074063   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:22.074093   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:22.113545   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:22.113581   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:22.152550   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:22.152583   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:22.556816   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:22.556856   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:22.602506   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:22.602542   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:22.655545   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:22.655577   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:22.775731   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:22.775775   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:22.827447   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:22.827476   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:22.864866   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:22.864898   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:22.903885   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:22.903912   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:22.947166   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:22.947214   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:25.472711   80243 system_pods.go:59] 8 kube-system pods found
	I0612 21:42:25.472743   80243 system_pods.go:61] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running
	I0612 21:42:25.472750   80243 system_pods.go:61] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running
	I0612 21:42:25.472755   80243 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running
	I0612 21:42:25.472761   80243 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running
	I0612 21:42:25.472765   80243 system_pods.go:61] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running
	I0612 21:42:25.472770   80243 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running
	I0612 21:42:25.472777   80243 system_pods.go:61] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:42:25.472783   80243 system_pods.go:61] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running
	I0612 21:42:25.472794   80243 system_pods.go:74] duration metric: took 3.885088008s to wait for pod list to return data ...
	I0612 21:42:25.472803   80243 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:42:25.475046   80243 default_sa.go:45] found service account: "default"
	I0612 21:42:25.475072   80243 default_sa.go:55] duration metric: took 2.260179ms for default service account to be created ...
	I0612 21:42:25.475082   80243 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:42:25.479903   80243 system_pods.go:86] 8 kube-system pods found
	I0612 21:42:25.479925   80243 system_pods.go:89] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running
	I0612 21:42:25.479931   80243 system_pods.go:89] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running
	I0612 21:42:25.479935   80243 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running
	I0612 21:42:25.479940   80243 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running
	I0612 21:42:25.479944   80243 system_pods.go:89] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running
	I0612 21:42:25.479950   80243 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running
	I0612 21:42:25.479959   80243 system_pods.go:89] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:42:25.479969   80243 system_pods.go:89] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running
	I0612 21:42:25.479979   80243 system_pods.go:126] duration metric: took 4.890624ms to wait for k8s-apps to be running ...
	I0612 21:42:25.479990   80243 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:42:25.480037   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:42:25.496529   80243 system_svc.go:56] duration metric: took 16.534285ms WaitForService to wait for kubelet
	I0612 21:42:25.496549   80243 kubeadm.go:576] duration metric: took 4m25.293104149s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:42:25.496565   80243 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:42:25.499277   80243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:42:25.499293   80243 node_conditions.go:123] node cpu capacity is 2
	I0612 21:42:25.499304   80243 node_conditions.go:105] duration metric: took 2.734965ms to run NodePressure ...
	I0612 21:42:25.499314   80243 start.go:240] waiting for startup goroutines ...
	I0612 21:42:25.499320   80243 start.go:245] waiting for cluster config update ...
	I0612 21:42:25.499339   80243 start.go:254] writing updated cluster config ...
	I0612 21:42:25.499628   80243 ssh_runner.go:195] Run: rm -f paused
	I0612 21:42:25.547780   80243 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:42:25.549693   80243 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-376087" cluster and "default" namespace by default
	I0612 21:42:22.317078   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.317826   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:26.818102   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.086466   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:24.101820   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:24.101877   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:24.145732   80762 cri.go:89] found id: ""
	I0612 21:42:24.145757   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.145767   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:24.145774   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:24.145832   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:24.182765   80762 cri.go:89] found id: ""
	I0612 21:42:24.182788   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.182795   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:24.182801   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:24.182889   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:24.235093   80762 cri.go:89] found id: ""
	I0612 21:42:24.235121   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.235129   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:24.235134   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:24.235208   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:24.269788   80762 cri.go:89] found id: ""
	I0612 21:42:24.269809   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.269816   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:24.269822   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:24.269867   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:24.306594   80762 cri.go:89] found id: ""
	I0612 21:42:24.306620   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.306628   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:24.306634   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:24.306693   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:24.343766   80762 cri.go:89] found id: ""
	I0612 21:42:24.343786   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.343795   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:24.343802   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:24.343858   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:24.384417   80762 cri.go:89] found id: ""
	I0612 21:42:24.384447   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.384457   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:24.384464   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:24.384524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:24.424935   80762 cri.go:89] found id: ""
	I0612 21:42:24.424958   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.424965   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:24.424974   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:24.424988   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:24.499737   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:24.499771   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:24.537631   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:24.537667   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:24.593743   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:24.593779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:24.608078   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:24.608107   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:24.679729   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:27.180828   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:27.195484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:27.195552   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:27.235725   80762 cri.go:89] found id: ""
	I0612 21:42:27.235750   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.235761   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:27.235768   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:27.235816   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:27.279763   80762 cri.go:89] found id: ""
	I0612 21:42:27.279795   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.279806   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:27.279814   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:27.279875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:27.320510   80762 cri.go:89] found id: ""
	I0612 21:42:27.320534   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.320543   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:27.320554   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:27.320641   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:27.359195   80762 cri.go:89] found id: ""
	I0612 21:42:27.359227   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.359239   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:27.359247   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:27.359312   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:27.394977   80762 cri.go:89] found id: ""
	I0612 21:42:27.395004   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.395015   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:27.395033   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:27.395099   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:27.431905   80762 cri.go:89] found id: ""
	I0612 21:42:27.431925   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.431933   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:27.431945   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:27.431990   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:27.469929   80762 cri.go:89] found id: ""
	I0612 21:42:27.469954   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.469961   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:27.469967   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:27.470024   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:27.505128   80762 cri.go:89] found id: ""
	I0612 21:42:27.505153   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.505160   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:27.505169   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:27.505180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:27.556739   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:27.556771   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:27.572730   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:27.572757   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:27.646797   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:27.646819   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:27.646836   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:27.726554   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:27.726599   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:26.665496   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:29.166323   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:29.316302   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:31.316334   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:30.268770   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:30.282575   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:30.282635   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:30.321243   80762 cri.go:89] found id: ""
	I0612 21:42:30.321276   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.321288   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:30.321295   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:30.321342   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:30.359403   80762 cri.go:89] found id: ""
	I0612 21:42:30.359432   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.359443   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:30.359451   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:30.359505   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:30.395967   80762 cri.go:89] found id: ""
	I0612 21:42:30.396006   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.396015   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:30.396028   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:30.396087   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:30.438093   80762 cri.go:89] found id: ""
	I0612 21:42:30.438123   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.438132   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:30.438138   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:30.438192   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:30.476859   80762 cri.go:89] found id: ""
	I0612 21:42:30.476888   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.476898   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:30.476905   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:30.476968   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:30.512998   80762 cri.go:89] found id: ""
	I0612 21:42:30.513026   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.513037   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:30.513045   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:30.513106   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:30.548822   80762 cri.go:89] found id: ""
	I0612 21:42:30.548847   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.548855   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:30.548861   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:30.548908   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:30.584385   80762 cri.go:89] found id: ""
	I0612 21:42:30.584417   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.584426   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:30.584439   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:30.584454   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:30.685995   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:30.686019   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:30.686030   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:30.778789   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:30.778827   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:30.819467   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:30.819511   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:30.872563   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:30.872599   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:31.659828   80404 pod_ready.go:81] duration metric: took 4m0.000909177s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" ...
	E0612 21:42:31.659857   80404 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0612 21:42:31.659875   80404 pod_ready.go:38] duration metric: took 4m13.021158077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:42:31.659904   80404 kubeadm.go:591] duration metric: took 4m20.257268424s to restartPrimaryControlPlane
	W0612 21:42:31.659968   80404 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:42:31.660002   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:42:33.316457   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:35.316525   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:33.387831   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:33.401663   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:33.401740   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:33.439690   80762 cri.go:89] found id: ""
	I0612 21:42:33.439723   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.439735   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:33.439743   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:33.439805   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:33.480330   80762 cri.go:89] found id: ""
	I0612 21:42:33.480357   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.480365   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:33.480371   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:33.480422   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:33.520367   80762 cri.go:89] found id: ""
	I0612 21:42:33.520396   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.520407   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:33.520415   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:33.520476   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:33.556859   80762 cri.go:89] found id: ""
	I0612 21:42:33.556892   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.556904   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:33.556911   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:33.556963   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:33.595982   80762 cri.go:89] found id: ""
	I0612 21:42:33.596014   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.596024   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:33.596030   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:33.596091   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:33.630942   80762 cri.go:89] found id: ""
	I0612 21:42:33.630974   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.630986   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:33.630994   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:33.631055   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:33.671649   80762 cri.go:89] found id: ""
	I0612 21:42:33.671676   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.671684   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:33.671690   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:33.671734   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:33.716664   80762 cri.go:89] found id: ""
	I0612 21:42:33.716690   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.716700   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:33.716711   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:33.716726   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:33.734168   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:33.734198   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:33.826469   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:33.826491   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:33.826507   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:33.915109   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:33.915142   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:33.957969   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:33.958007   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:36.515258   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:36.529603   80762 kubeadm.go:591] duration metric: took 4m4.234271001s to restartPrimaryControlPlane
	W0612 21:42:36.529688   80762 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:42:36.529719   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:42:37.316720   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:39.317633   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:41.816783   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:41.545629   80762 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.01588354s)
	I0612 21:42:41.545734   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:42:41.561025   80762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:42:41.572788   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:42:41.583027   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:42:41.583052   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:42:41.583095   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:42:41.593433   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:42:41.593502   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:42:41.603944   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:42:41.613382   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:42:41.613432   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:42:41.622874   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:42:41.632270   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:42:41.632370   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:42:41.642072   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:42:41.652120   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:42:41.652194   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:42:41.662684   80762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:42:41.894903   80762 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:42:43.817122   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:45.817164   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:47.817201   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:50.316134   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:52.317090   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:54.318066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:56.816196   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:58.817948   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:01.316826   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:03.728120   80404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.068094257s)
	I0612 21:43:03.728183   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:03.744990   80404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:43:03.755365   80404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:43:03.765154   80404 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:43:03.765181   80404 kubeadm.go:156] found existing configuration files:
	
	I0612 21:43:03.765226   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:43:03.775246   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:43:03.775304   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:43:03.785389   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:43:03.794999   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:43:03.795046   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:43:03.804771   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:43:03.814137   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:43:03.814187   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:43:03.824449   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:43:03.833631   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:43:03.833687   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:43:03.843203   80404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:43:03.895827   80404 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 21:43:03.895927   80404 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:43:04.040495   80404 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:43:04.040666   80404 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:43:04.040822   80404 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:43:04.252894   80404 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:43:04.254835   80404 out.go:204]   - Generating certificates and keys ...
	I0612 21:43:04.254952   80404 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:43:04.255060   80404 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:43:04.255219   80404 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:43:04.255296   80404 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:43:04.255399   80404 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:43:04.255490   80404 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:43:04.255589   80404 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:43:04.255692   80404 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:43:04.255794   80404 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:43:04.255885   80404 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:43:04.255923   80404 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:43:04.255978   80404 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:43:04.460505   80404 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:43:04.640215   80404 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 21:43:04.722455   80404 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:43:04.862670   80404 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:43:05.112478   80404 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:43:05.113163   80404 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:43:05.115573   80404 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:43:03.817386   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:06.317207   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:05.117650   80404 out.go:204]   - Booting up control plane ...
	I0612 21:43:05.117758   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:43:05.117887   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:43:05.119410   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:43:05.139412   80404 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:43:05.139504   80404 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:43:05.139575   80404 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:43:05.268539   80404 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 21:43:05.268636   80404 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 21:43:05.771267   80404 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.898809ms
	I0612 21:43:05.771364   80404 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 21:43:11.274484   80404 kubeadm.go:309] [api-check] The API server is healthy after 5.503111655s
	I0612 21:43:11.291273   80404 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 21:43:11.319349   80404 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 21:43:11.357447   80404 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 21:43:11.357709   80404 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-591460 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 21:43:11.368936   80404 kubeadm.go:309] [bootstrap-token] Using token: 0iiegq.ujvrnknfmyshffxu
	I0612 21:43:08.816875   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:10.817031   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:11.370411   80404 out.go:204]   - Configuring RBAC rules ...
	I0612 21:43:11.370567   80404 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 21:43:11.375891   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 21:43:11.388345   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 21:43:11.392726   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 21:43:11.396867   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 21:43:11.401212   80404 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 21:43:11.683506   80404 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 21:43:12.114832   80404 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 21:43:12.683696   80404 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 21:43:12.683724   80404 kubeadm.go:309] 
	I0612 21:43:12.683811   80404 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 21:43:12.683823   80404 kubeadm.go:309] 
	I0612 21:43:12.683938   80404 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 21:43:12.683958   80404 kubeadm.go:309] 
	I0612 21:43:12.684002   80404 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 21:43:12.684070   80404 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 21:43:12.684129   80404 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 21:43:12.684146   80404 kubeadm.go:309] 
	I0612 21:43:12.684232   80404 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 21:43:12.684247   80404 kubeadm.go:309] 
	I0612 21:43:12.684317   80404 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 21:43:12.684330   80404 kubeadm.go:309] 
	I0612 21:43:12.684398   80404 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 21:43:12.684502   80404 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 21:43:12.684595   80404 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 21:43:12.684604   80404 kubeadm.go:309] 
	I0612 21:43:12.684700   80404 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 21:43:12.684807   80404 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 21:43:12.684816   80404 kubeadm.go:309] 
	I0612 21:43:12.684915   80404 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0iiegq.ujvrnknfmyshffxu \
	I0612 21:43:12.685061   80404 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 21:43:12.685105   80404 kubeadm.go:309] 	--control-plane 
	I0612 21:43:12.685116   80404 kubeadm.go:309] 
	I0612 21:43:12.685237   80404 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 21:43:12.685248   80404 kubeadm.go:309] 
	I0612 21:43:12.685352   80404 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0iiegq.ujvrnknfmyshffxu \
	I0612 21:43:12.685509   80404 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 21:43:12.685622   80404 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
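The --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 digest of the cluster CA's public key. If it ever needs to be recomputed on the control-plane node (for instance to hand out a fresh join command), the kubeadm-documented openssl pipeline below reproduces it; the certificate path is taken from the "[certs] Using certificateDir folder" line that appears later in this log and is otherwise an assumption.

    # Recompute the discovery-token CA cert hash (sketch).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'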
	I0612 21:43:12.685831   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:43:12.685848   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:43:12.687835   80404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:43:12.689100   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:43:12.700384   80404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
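The 496-byte /etc/cni/net.d/1-k8s.conflist copied here is the bridge CNI configuration minikube generates for the kvm2 + crio combination; its contents are not reproduced in the log. The snippet below is only a minimal, hypothetical bridge conflist in the same spirit (subnet and plugin options are assumptions), shown to illustrate the file format the kubelet ends up consuming:

    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }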
	I0612 21:43:12.720228   80404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:43:12.720305   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:12.720330   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-591460 minikube.k8s.io/updated_at=2024_06_12T21_43_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=embed-certs-591460 minikube.k8s.io/primary=true
	I0612 21:43:12.751866   80404 ops.go:34] apiserver oom_adj: -16
	I0612 21:43:12.927644   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:13.428393   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:13.928221   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:14.428286   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:12.817125   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:15.316899   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:14.928273   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:15.428431   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:15.927968   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:16.428202   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:16.927882   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.428544   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.927844   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:18.428385   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:18.928105   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:19.428421   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.317080   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:19.317419   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:21.816670   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:19.928638   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:20.428310   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:20.928565   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:21.428377   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:21.928158   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:22.428356   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:22.927863   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:23.427955   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:23.928226   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:24.427823   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:24.928404   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:25.428367   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:25.514417   80404 kubeadm.go:1107] duration metric: took 12.794169259s to wait for elevateKubeSystemPrivileges
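The repeated "kubectl get sa default" calls above are the elevateKubeSystemPrivileges wait: minikube polls until the default service account exists, which is the signal that RBAC and the service-account controller are serving requests. A rough shell equivalent of that loop, assuming the same binary and kubeconfig paths shown in the log, is:

    # Poll until the default service account appears (sketch of the wait above).
    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done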
	W0612 21:43:25.514460   80404 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 21:43:25.514470   80404 kubeadm.go:393] duration metric: took 5m14.162212832s to StartCluster
	I0612 21:43:25.514490   80404 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:43:25.514576   80404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:43:25.518597   80404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:43:25.518811   80404 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:43:25.520571   80404 out.go:177] * Verifying Kubernetes components...
	I0612 21:43:25.518903   80404 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:43:25.519030   80404 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:43:25.521967   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:43:25.522001   80404 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-591460"
	I0612 21:43:25.522043   80404 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-591460"
	W0612 21:43:25.522056   80404 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:43:25.522053   80404 addons.go:69] Setting default-storageclass=true in profile "embed-certs-591460"
	I0612 21:43:25.522089   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.522100   80404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-591460"
	I0612 21:43:25.522089   80404 addons.go:69] Setting metrics-server=true in profile "embed-certs-591460"
	I0612 21:43:25.522158   80404 addons.go:234] Setting addon metrics-server=true in "embed-certs-591460"
	W0612 21:43:25.522170   80404 addons.go:243] addon metrics-server should already be in state true
	I0612 21:43:25.522196   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.522502   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522509   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522532   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.522535   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.522585   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522611   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.538989   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I0612 21:43:25.539032   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0612 21:43:25.539591   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.539592   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.540199   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.540222   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.540293   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.540323   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.540610   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.540736   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.541265   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.541281   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.541312   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.541431   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.542393   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46299
	I0612 21:43:25.543042   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.543604   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.543643   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.543997   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.544208   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.547823   80404 addons.go:234] Setting addon default-storageclass=true in "embed-certs-591460"
	W0612 21:43:25.547849   80404 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:43:25.547878   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.548237   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.548272   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.558486   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46589
	I0612 21:43:25.558934   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.559936   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.559967   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.560387   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.560600   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.560728   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I0612 21:43:25.561116   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.561595   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.561610   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.561928   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.562198   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.562832   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.565065   80404 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:43:25.563946   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.565393   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0612 21:43:25.566521   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:43:25.566535   80404 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:43:25.566582   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.568114   80404 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:43:24.316660   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:25.810857   80157 pod_ready.go:81] duration metric: took 4m0.000926725s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" ...
	E0612 21:43:25.810888   80157 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0612 21:43:25.810936   80157 pod_ready.go:38] duration metric: took 4m14.539121336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:25.810971   80157 kubeadm.go:591] duration metric: took 4m21.56451584s to restartPrimaryControlPlane
	W0612 21:43:25.811042   80157 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:43:25.811074   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:43:25.567032   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.569772   80404 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:43:25.569794   80404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:43:25.569812   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.570271   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.570291   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.570363   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.570698   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.571498   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.571514   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.571539   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.571691   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.571861   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.572032   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.572851   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.572894   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.573962   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.574403   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.574429   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.574762   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.574974   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.575164   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.575464   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.589637   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39227
	I0612 21:43:25.590155   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.591035   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.591059   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.591596   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.591845   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.593885   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.594095   80404 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:43:25.594112   80404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:43:25.594131   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.597769   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.598347   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.598379   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.598434   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.598635   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.598766   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.598860   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.762134   80404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:43:25.818663   80404 node_ready.go:35] waiting up to 6m0s for node "embed-certs-591460" to be "Ready" ...
	I0612 21:43:25.830753   80404 node_ready.go:49] node "embed-certs-591460" has status "Ready":"True"
	I0612 21:43:25.830780   80404 node_ready.go:38] duration metric: took 12.086962ms for node "embed-certs-591460" to be "Ready" ...
	I0612 21:43:25.830792   80404 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:25.841084   80404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:25.929395   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:43:25.929427   80404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:43:26.001489   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:43:26.016234   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:43:26.016275   80404 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:43:26.030851   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:43:26.062707   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:43:26.062741   80404 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:43:26.157461   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:43:27.281342   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.279809959s)
	I0612 21:43:27.281364   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.250478112s)
	I0612 21:43:27.281392   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281405   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281408   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281420   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281712   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.281730   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.281739   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281748   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281861   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.281916   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.281933   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281942   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.283567   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.283582   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.283592   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.283597   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.283728   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.283740   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.324600   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.324625   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.324937   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.324941   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.324965   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.366096   80404 pod_ready.go:92] pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:27.366126   80404 pod_ready.go:81] duration metric: took 1.52501871s for pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:27.366139   80404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:27.530900   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.373391416s)
	I0612 21:43:27.530973   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.530987   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.531382   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.531399   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.531406   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.531419   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.531428   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.533199   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.533212   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.533226   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.533238   80404 addons.go:475] Verifying addon metrics-server=true in "embed-certs-591460"
	I0612 21:43:27.534895   80404 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:43:27.536129   80404 addons.go:510] duration metric: took 2.017228253s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
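With storage-provisioner, default-storageclass and metrics-server applied, the resulting objects can be inspected directly; the commands below are illustrative only (they are not part of the test run) and use the kubectl context that minikube configures for this profile. Note that in this run the metrics-server pod was still Pending, as the pod listings further down show.

    # Illustrative verification commands (not executed by the test).
    kubectl --context embed-certs-591460 -n kube-system get deploy metrics-server
    kubectl --context embed-certs-591460 -n kube-system get pods
    kubectl --context embed-certs-591460 get storageclass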
	I0612 21:43:28.373835   80404 pod_ready.go:92] pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.373862   80404 pod_ready.go:81] duration metric: took 1.007715736s for pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.373870   80404 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.379042   80404 pod_ready.go:92] pod "etcd-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.379065   80404 pod_ready.go:81] duration metric: took 5.188395ms for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.379078   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.384218   80404 pod_ready.go:92] pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.384233   80404 pod_ready.go:81] duration metric: took 5.148944ms for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.384241   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.389023   80404 pod_ready.go:92] pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.389046   80404 pod_ready.go:81] duration metric: took 4.78947ms for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.389056   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5l2wz" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.623880   80404 pod_ready.go:92] pod "kube-proxy-5l2wz" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.623902   80404 pod_ready.go:81] duration metric: took 234.83854ms for pod "kube-proxy-5l2wz" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.623910   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:29.022477   80404 pod_ready.go:92] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:29.022508   80404 pod_ready.go:81] duration metric: took 398.590821ms for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:29.022522   80404 pod_ready.go:38] duration metric: took 3.191712664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:29.022539   80404 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:43:29.022602   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:43:29.038776   80404 api_server.go:72] duration metric: took 3.51993276s to wait for apiserver process to appear ...
	I0612 21:43:29.038805   80404 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:43:29.038827   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:43:29.045455   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0612 21:43:29.047050   80404 api_server.go:141] control plane version: v1.30.1
	I0612 21:43:29.047072   80404 api_server.go:131] duration metric: took 8.260077ms to wait for apiserver health ...
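The healthz probe above is a plain HTTPS GET against the apiserver endpoint recorded a few lines earlier. Reproducing it by hand requires trusting the cluster CA; a hedged equivalent, with the CA path assumed from this run's minikube home directory, is:

    # Manual apiserver health check (sketch; CA path is an assumption).
    curl --cacert /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt \
         https://192.168.39.147:8443/healthz
    # expected response body: ok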
	I0612 21:43:29.047080   80404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:43:29.226569   80404 system_pods.go:59] 9 kube-system pods found
	I0612 21:43:29.226603   80404 system_pods.go:61] "coredns-7db6d8ff4d-fpf5q" [1091154b-ef24-4447-b294-03f8d704f37e] Running
	I0612 21:43:29.226611   80404 system_pods.go:61] "coredns-7db6d8ff4d-hs7zn" [d8af54bf-17f9-48fe-a770-536c2313bc2a] Running
	I0612 21:43:29.226618   80404 system_pods.go:61] "etcd-embed-certs-591460" [bc7ad3a2-6cb6-4c32-94a7-20f6e3337b86] Running
	I0612 21:43:29.226624   80404 system_pods.go:61] "kube-apiserver-embed-certs-591460" [94b14cb3-5c3d-4be7-b5dc-3259d1fac58c] Running
	I0612 21:43:29.226631   80404 system_pods.go:61] "kube-controller-manager-embed-certs-591460" [c66f1ad8-df77-466e-9bbf-292e0937c7df] Running
	I0612 21:43:29.226636   80404 system_pods.go:61] "kube-proxy-5l2wz" [7130c7fb-880b-4a7b-937d-3980c89f217a] Running
	I0612 21:43:29.226642   80404 system_pods.go:61] "kube-scheduler-embed-certs-591460" [a02c9ded-942d-4107-a8f5-878a7924f1a4] Running
	I0612 21:43:29.226652   80404 system_pods.go:61] "metrics-server-569cc877fc-r7fbt" [e33a1ff8-3032-4be5-8b6a-3eedfbb92611] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:43:29.226659   80404 system_pods.go:61] "storage-provisioner" [ade8816b-866c-4ba3-9665-fc9b144a4286] Running
	I0612 21:43:29.226671   80404 system_pods.go:74] duration metric: took 179.583899ms to wait for pod list to return data ...
	I0612 21:43:29.226684   80404 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:43:29.422244   80404 default_sa.go:45] found service account: "default"
	I0612 21:43:29.422278   80404 default_sa.go:55] duration metric: took 195.585835ms for default service account to be created ...
	I0612 21:43:29.422290   80404 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:43:29.626614   80404 system_pods.go:86] 9 kube-system pods found
	I0612 21:43:29.626650   80404 system_pods.go:89] "coredns-7db6d8ff4d-fpf5q" [1091154b-ef24-4447-b294-03f8d704f37e] Running
	I0612 21:43:29.626659   80404 system_pods.go:89] "coredns-7db6d8ff4d-hs7zn" [d8af54bf-17f9-48fe-a770-536c2313bc2a] Running
	I0612 21:43:29.626667   80404 system_pods.go:89] "etcd-embed-certs-591460" [bc7ad3a2-6cb6-4c32-94a7-20f6e3337b86] Running
	I0612 21:43:29.626673   80404 system_pods.go:89] "kube-apiserver-embed-certs-591460" [94b14cb3-5c3d-4be7-b5dc-3259d1fac58c] Running
	I0612 21:43:29.626680   80404 system_pods.go:89] "kube-controller-manager-embed-certs-591460" [c66f1ad8-df77-466e-9bbf-292e0937c7df] Running
	I0612 21:43:29.626687   80404 system_pods.go:89] "kube-proxy-5l2wz" [7130c7fb-880b-4a7b-937d-3980c89f217a] Running
	I0612 21:43:29.626693   80404 system_pods.go:89] "kube-scheduler-embed-certs-591460" [a02c9ded-942d-4107-a8f5-878a7924f1a4] Running
	I0612 21:43:29.626703   80404 system_pods.go:89] "metrics-server-569cc877fc-r7fbt" [e33a1ff8-3032-4be5-8b6a-3eedfbb92611] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:43:29.626714   80404 system_pods.go:89] "storage-provisioner" [ade8816b-866c-4ba3-9665-fc9b144a4286] Running
	I0612 21:43:29.626725   80404 system_pods.go:126] duration metric: took 204.428087ms to wait for k8s-apps to be running ...
	I0612 21:43:29.626737   80404 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:43:29.626793   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:29.642423   80404 system_svc.go:56] duration metric: took 15.67694ms WaitForService to wait for kubelet
	I0612 21:43:29.642457   80404 kubeadm.go:576] duration metric: took 4.123619864s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:43:29.642481   80404 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:43:29.825804   80404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:43:29.825833   80404 node_conditions.go:123] node cpu capacity is 2
	I0612 21:43:29.825846   80404 node_conditions.go:105] duration metric: took 183.359091ms to run NodePressure ...
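The NodePressure verification reads the node's advertised capacity (here roughly 17 GiB of ephemeral storage and 2 CPUs). The same fields can be pulled straight from the API; an illustrative one-liner, assuming the same profile context:

    # Read the capacity fields the NodePressure check is based on (illustrative).
    kubectl --context embed-certs-591460 get node embed-certs-591460 \
      -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}'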
	I0612 21:43:29.825860   80404 start.go:240] waiting for startup goroutines ...
	I0612 21:43:29.825868   80404 start.go:245] waiting for cluster config update ...
	I0612 21:43:29.825881   80404 start.go:254] writing updated cluster config ...
	I0612 21:43:29.826229   80404 ssh_runner.go:195] Run: rm -f paused
	I0612 21:43:29.878580   80404 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:43:29.880438   80404 out.go:177] * Done! kubectl is now configured to use "embed-certs-591460" cluster and "default" namespace by default
	I0612 21:43:57.924825   80157 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.113719509s)
	I0612 21:43:57.924912   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:57.942507   80157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:43:57.953901   80157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:43:57.964374   80157 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:43:57.964396   80157 kubeadm.go:156] found existing configuration files:
	
	I0612 21:43:57.964439   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:43:57.974281   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:43:57.974366   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:43:57.985000   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:43:57.995268   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:43:57.995346   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:43:58.005482   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:43:58.015598   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:43:58.015659   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:43:58.028582   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:43:58.038706   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:43:58.038756   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
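The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig-style file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the endpoint is absent (or, as here, when the file does not exist at all). Compressed into a standalone sketch:

    # Sketch of the stale kubeconfig cleanup performed above.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done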
	I0612 21:43:58.051818   80157 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:43:58.110576   80157 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 21:43:58.110645   80157 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:43:58.274454   80157 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:43:58.274625   80157 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:43:58.274751   80157 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:43:58.484837   80157 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:43:58.486643   80157 out.go:204]   - Generating certificates and keys ...
	I0612 21:43:58.486753   80157 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:43:58.486845   80157 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:43:58.486963   80157 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:43:58.487058   80157 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:43:58.487192   80157 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:43:58.487283   80157 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:43:58.487368   80157 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:43:58.487452   80157 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:43:58.487559   80157 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:43:58.487653   80157 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:43:58.487728   80157 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:43:58.487826   80157 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:43:58.644916   80157 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:43:58.789369   80157 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 21:43:58.924153   80157 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:43:59.044332   80157 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:43:59.352910   80157 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:43:59.353462   80157 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:43:59.356967   80157 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:43:59.359470   80157 out.go:204]   - Booting up control plane ...
	I0612 21:43:59.359596   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:43:59.359687   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:43:59.359792   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:43:59.378280   80157 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:43:59.379149   80157 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:43:59.379240   80157 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:43:59.521694   80157 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 21:43:59.521775   80157 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 21:44:00.036696   80157 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 514.972931ms
	I0612 21:44:00.036836   80157 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 21:44:05.539363   80157 kubeadm.go:309] [api-check] The API server is healthy after 5.502859715s
	I0612 21:44:05.552779   80157 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 21:44:05.567296   80157 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 21:44:05.603398   80157 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 21:44:05.603707   80157 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-087875 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 21:44:05.619311   80157 kubeadm.go:309] [bootstrap-token] Using token: x2knjj.1kuv2wdowwsbztfg
	I0612 21:44:05.621026   80157 out.go:204]   - Configuring RBAC rules ...
	I0612 21:44:05.621180   80157 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 21:44:05.628474   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 21:44:05.642438   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 21:44:05.647606   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 21:44:05.651982   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 21:44:05.656129   80157 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 21:44:05.947680   80157 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 21:44:06.430716   80157 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 21:44:06.950446   80157 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 21:44:06.951688   80157 kubeadm.go:309] 
	I0612 21:44:06.951771   80157 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 21:44:06.951782   80157 kubeadm.go:309] 
	I0612 21:44:06.951857   80157 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 21:44:06.951866   80157 kubeadm.go:309] 
	I0612 21:44:06.951919   80157 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 21:44:06.952007   80157 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 21:44:06.952083   80157 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 21:44:06.952094   80157 kubeadm.go:309] 
	I0612 21:44:06.952160   80157 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 21:44:06.952172   80157 kubeadm.go:309] 
	I0612 21:44:06.952222   80157 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 21:44:06.952232   80157 kubeadm.go:309] 
	I0612 21:44:06.952285   80157 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 21:44:06.952375   80157 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 21:44:06.952460   80157 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 21:44:06.952476   80157 kubeadm.go:309] 
	I0612 21:44:06.952612   80157 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 21:44:06.952711   80157 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 21:44:06.952722   80157 kubeadm.go:309] 
	I0612 21:44:06.952819   80157 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token x2knjj.1kuv2wdowwsbztfg \
	I0612 21:44:06.952933   80157 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 21:44:06.952963   80157 kubeadm.go:309] 	--control-plane 
	I0612 21:44:06.952985   80157 kubeadm.go:309] 
	I0612 21:44:06.953100   80157 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 21:44:06.953114   80157 kubeadm.go:309] 
	I0612 21:44:06.953219   80157 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token x2knjj.1kuv2wdowwsbztfg \
	I0612 21:44:06.953373   80157 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 21:44:06.953943   80157 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:44:06.953986   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:44:06.954003   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:44:06.956587   80157 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:44:06.957989   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:44:06.972666   80157 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:44:07.000720   80157 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:44:07.000822   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:07.000839   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-087875 minikube.k8s.io/updated_at=2024_06_12T21_44_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=no-preload-087875 minikube.k8s.io/primary=true
	I0612 21:44:07.201613   80157 ops.go:34] apiserver oom_adj: -16
	I0612 21:44:07.201713   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:07.702791   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:08.201886   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:08.702020   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:09.202755   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:09.702683   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:10.202007   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:10.702272   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:11.201764   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:11.702383   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:12.201880   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:12.702587   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:13.202524   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:13.702498   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:14.202157   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:14.702197   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:15.201852   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:15.702444   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:16.201919   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:16.701722   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:17.202307   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:17.701823   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:18.202602   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:18.702354   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:19.202207   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:19.308654   80157 kubeadm.go:1107] duration metric: took 12.307897648s to wait for elevateKubeSystemPrivileges
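The burst of `kubectl get sa default` calls above is minikube polling until the default service account exists before binding it to cluster-admin. A minimal Go sketch of such a polling loop, not taken from the minikube source; the binary path, kubeconfig path, and ~500ms interval are assumptions read off the log:

```go
// Poll until the "default" service account exists, mirroring the
// repeated `kubectl get sa default` calls in the log above.
// Illustrative sketch only; paths and intervals are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; RBAC bootstrap can proceed
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	return fmt.Errorf("timed out waiting for default service account")
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.30.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println("result:", err)
}
```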
	W0612 21:44:19.308699   80157 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 21:44:19.308709   80157 kubeadm.go:393] duration metric: took 5m15.118303799s to StartCluster
	I0612 21:44:19.308738   80157 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:44:19.308825   80157 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:44:19.311295   80157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:44:19.311587   80157 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:44:19.313263   80157 out.go:177] * Verifying Kubernetes components...
	I0612 21:44:19.311693   80157 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:44:19.311780   80157 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:44:19.315137   80157 addons.go:69] Setting storage-provisioner=true in profile "no-preload-087875"
	I0612 21:44:19.315148   80157 addons.go:69] Setting default-storageclass=true in profile "no-preload-087875"
	I0612 21:44:19.315192   80157 addons.go:234] Setting addon storage-provisioner=true in "no-preload-087875"
	I0612 21:44:19.315201   80157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-087875"
	I0612 21:44:19.315202   80157 addons.go:69] Setting metrics-server=true in profile "no-preload-087875"
	I0612 21:44:19.315240   80157 addons.go:234] Setting addon metrics-server=true in "no-preload-087875"
	W0612 21:44:19.315255   80157 addons.go:243] addon metrics-server should already be in state true
	I0612 21:44:19.315296   80157 host.go:66] Checking if "no-preload-087875" exists ...
	W0612 21:44:19.315209   80157 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:44:19.315397   80157 host.go:66] Checking if "no-preload-087875" exists ...
	I0612 21:44:19.315139   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:44:19.315636   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315666   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.315653   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315698   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.315731   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315750   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.331461   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40419
	I0612 21:44:19.331495   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I0612 21:44:19.331924   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.332019   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.332446   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.332466   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.332580   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.332603   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.332866   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.332911   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.333087   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.333484   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.333508   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.334462   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0612 21:44:19.334922   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.335447   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.335474   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.335812   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.336376   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.336408   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.336657   80157 addons.go:234] Setting addon default-storageclass=true in "no-preload-087875"
	W0612 21:44:19.336675   80157 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:44:19.336701   80157 host.go:66] Checking if "no-preload-087875" exists ...
	I0612 21:44:19.337047   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.337078   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.350724   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I0612 21:44:19.351308   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.351869   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.351897   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.352272   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.352503   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.354434   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I0612 21:44:19.354532   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.356594   80157 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:44:19.354927   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.355284   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37489
	I0612 21:44:19.357181   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.358026   80157 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:44:19.357219   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.358040   80157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:44:19.358048   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.358058   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.358407   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.358560   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.358577   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.359024   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.359035   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.359069   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.359408   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.361013   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.361524   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.363337   80157 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:44:19.361921   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.362312   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.364713   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:44:19.364727   80157 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:44:19.364736   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.364744   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.365021   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.365260   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.365419   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.368572   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.368971   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.368988   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.369144   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.369316   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.369431   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.369538   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.377220   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
	I0612 21:44:19.377598   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.378595   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.378621   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.378931   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.379127   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.380646   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.380844   80157 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:44:19.380857   80157 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:44:19.380869   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.383763   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.384201   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.384216   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.384504   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.384660   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.384816   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.384956   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.516231   80157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:44:19.539205   80157 node_ready.go:35] waiting up to 6m0s for node "no-preload-087875" to be "Ready" ...
	I0612 21:44:19.546948   80157 node_ready.go:49] node "no-preload-087875" has status "Ready":"True"
	I0612 21:44:19.546972   80157 node_ready.go:38] duration metric: took 7.739123ms for node "no-preload-087875" to be "Ready" ...
	I0612 21:44:19.546985   80157 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:44:19.553454   80157 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.562831   80157 pod_ready.go:92] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.562854   80157 pod_ready.go:81] duration metric: took 9.377758ms for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.562862   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.568274   80157 pod_ready.go:92] pod "kube-apiserver-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.568296   80157 pod_ready.go:81] duration metric: took 5.425162ms for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.568306   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.572960   80157 pod_ready.go:92] pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.572991   80157 pod_ready.go:81] duration metric: took 4.669828ms for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.573002   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lnhzt" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.620522   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:44:19.620548   80157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:44:19.654325   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:44:19.681762   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:44:19.681800   80157 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:44:19.699701   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:44:19.774496   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:44:19.774526   80157 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:44:19.874891   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:44:20.590260   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590292   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.590276   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590360   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.590587   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.590634   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.590644   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.590651   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590658   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.592402   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.592462   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.592410   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.592411   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.592414   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.592551   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.592476   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.592655   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.592952   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.593069   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.593093   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.634339   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.634370   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.634813   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.634864   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.634880   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.321337   80157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.446394551s)
	I0612 21:44:21.321389   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:21.321403   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:21.321802   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:21.321827   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:21.321968   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.322012   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:21.322023   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:21.322278   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:21.322294   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.322305   80157 addons.go:475] Verifying addon metrics-server=true in "no-preload-087875"
	I0612 21:44:21.324652   80157 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:44:21.326653   80157 addons.go:510] duration metric: took 2.01495884s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
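The metrics-server enablement above boils down to copying manifests into /etc/kubernetes/addons and applying them with the in-VM kubectl. A hedged sketch of that apply step, run locally here rather than over minikube's ssh_runner; the paths are taken from the log:

```go
// Apply addon manifests with `kubectl apply -f`, with KUBECONFIG
// pointing at the in-VM kubeconfig, as the log above does.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyAddon(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\n", out)
	return err
}

func main() {
	_ = applyAddon(
		"/var/lib/minikube/binaries/v1.30.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
}
```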
	I0612 21:44:21.589251   80157 pod_ready.go:92] pod "kube-proxy-lnhzt" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:21.589290   80157 pod_ready.go:81] duration metric: took 2.016278458s for pod "kube-proxy-lnhzt" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.589305   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.652083   80157 pod_ready.go:92] pod "kube-scheduler-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:21.652122   80157 pod_ready.go:81] duration metric: took 62.805318ms for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.652136   80157 pod_ready.go:38] duration metric: took 2.105136343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:44:21.652156   80157 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:44:21.652237   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:44:21.683110   80157 api_server.go:72] duration metric: took 2.371482611s to wait for apiserver process to appear ...
	I0612 21:44:21.683148   80157 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:44:21.683187   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:44:21.704637   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 200:
	ok
	I0612 21:44:21.714032   80157 api_server.go:141] control plane version: v1.30.1
	I0612 21:44:21.714061   80157 api_server.go:131] duration metric: took 30.904631ms to wait for apiserver health ...
	I0612 21:44:21.714070   80157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:44:21.751484   80157 system_pods.go:59] 9 kube-system pods found
	I0612 21:44:21.751520   80157 system_pods.go:61] "coredns-7db6d8ff4d-hsvvf" [2b6c768b-75e2-4c11-99db-1103367ccc20] Running
	I0612 21:44:21.751526   80157 system_pods.go:61] "coredns-7db6d8ff4d-v75tt" [8b48ba7d-8f66-4c31-ac14-3a38e18fa249] Running
	I0612 21:44:21.751532   80157 system_pods.go:61] "etcd-no-preload-087875" [36cea519-d5ea-41f0-893f-358fe8af4448] Running
	I0612 21:44:21.751537   80157 system_pods.go:61] "kube-apiserver-no-preload-087875" [a09319fb-adef-467d-8482-5adf57328c2b] Running
	I0612 21:44:21.751544   80157 system_pods.go:61] "kube-controller-manager-no-preload-087875" [466fead1-a45a-4b33-8587-dc894fa20073] Running
	I0612 21:44:21.751548   80157 system_pods.go:61] "kube-proxy-lnhzt" [bdf1156c-ba02-4551-aefa-66379b05e066] Running
	I0612 21:44:21.751552   80157 system_pods.go:61] "kube-scheduler-no-preload-087875" [fc8eccee-2e27-4ea0-9e6c-0d5c127cdd4f] Running
	I0612 21:44:21.751560   80157 system_pods.go:61] "metrics-server-569cc877fc-mdmgw" [17725ee6-1d17-4a1b-9c65-f596b9b7725f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:44:21.751568   80157 system_pods.go:61] "storage-provisioner" [90368fec-12d9-4baf-aef6-233691b5e99d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:44:21.751581   80157 system_pods.go:74] duration metric: took 37.503399ms to wait for pod list to return data ...
	I0612 21:44:21.751595   80157 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:44:21.943440   80157 default_sa.go:45] found service account: "default"
	I0612 21:44:21.943465   80157 default_sa.go:55] duration metric: took 191.863221ms for default service account to be created ...
	I0612 21:44:21.943473   80157 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:44:22.146922   80157 system_pods.go:86] 9 kube-system pods found
	I0612 21:44:22.146960   80157 system_pods.go:89] "coredns-7db6d8ff4d-hsvvf" [2b6c768b-75e2-4c11-99db-1103367ccc20] Running
	I0612 21:44:22.146969   80157 system_pods.go:89] "coredns-7db6d8ff4d-v75tt" [8b48ba7d-8f66-4c31-ac14-3a38e18fa249] Running
	I0612 21:44:22.146975   80157 system_pods.go:89] "etcd-no-preload-087875" [36cea519-d5ea-41f0-893f-358fe8af4448] Running
	I0612 21:44:22.146982   80157 system_pods.go:89] "kube-apiserver-no-preload-087875" [a09319fb-adef-467d-8482-5adf57328c2b] Running
	I0612 21:44:22.146988   80157 system_pods.go:89] "kube-controller-manager-no-preload-087875" [466fead1-a45a-4b33-8587-dc894fa20073] Running
	I0612 21:44:22.146994   80157 system_pods.go:89] "kube-proxy-lnhzt" [bdf1156c-ba02-4551-aefa-66379b05e066] Running
	I0612 21:44:22.147000   80157 system_pods.go:89] "kube-scheduler-no-preload-087875" [fc8eccee-2e27-4ea0-9e6c-0d5c127cdd4f] Running
	I0612 21:44:22.147012   80157 system_pods.go:89] "metrics-server-569cc877fc-mdmgw" [17725ee6-1d17-4a1b-9c65-f596b9b7725f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:44:22.147030   80157 system_pods.go:89] "storage-provisioner" [90368fec-12d9-4baf-aef6-233691b5e99d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:44:22.147042   80157 system_pods.go:126] duration metric: took 203.562938ms to wait for k8s-apps to be running ...
	I0612 21:44:22.147056   80157 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:44:22.147110   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:44:22.167568   80157 system_svc.go:56] duration metric: took 20.500218ms WaitForService to wait for kubelet
	I0612 21:44:22.167606   80157 kubeadm.go:576] duration metric: took 2.855984791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:44:22.167627   80157 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:44:22.343015   80157 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:44:22.343039   80157 node_conditions.go:123] node cpu capacity is 2
	I0612 21:44:22.343051   80157 node_conditions.go:105] duration metric: took 175.419211ms to run NodePressure ...
	I0612 21:44:22.343064   80157 start.go:240] waiting for startup goroutines ...
	I0612 21:44:22.343073   80157 start.go:245] waiting for cluster config update ...
	I0612 21:44:22.343085   80157 start.go:254] writing updated cluster config ...
	I0612 21:44:22.343387   80157 ssh_runner.go:195] Run: rm -f paused
	I0612 21:44:22.391092   80157 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:44:22.393268   80157 out.go:177] * Done! kubectl is now configured to use "no-preload-087875" cluster and "default" namespace by default
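Before declaring the cluster ready, the trace above waits for the node and system pods to report Ready and then probes the apiserver at https://192.168.72.63:8443/healthz until it answers 200. A rough sketch of that health probe, assuming a skip-verify TLS client (not necessarily what minikube configures):

```go
// Poll the apiserver /healthz endpoint until it returns 200,
// mirroring the "waiting for apiserver healthz status" step above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.63:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```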
	I0612 21:44:37.700712   80762 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:44:37.700862   80762 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:44:37.702455   80762 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:44:37.702552   80762 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:44:37.702639   80762 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:44:37.702749   80762 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:44:37.702887   80762 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:44:37.702992   80762 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:44:37.704955   80762 out.go:204]   - Generating certificates and keys ...
	I0612 21:44:37.705032   80762 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:44:37.705088   80762 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:44:37.705159   80762 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:44:37.705228   80762 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:44:37.705289   80762 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:44:37.705368   80762 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:44:37.705467   80762 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:44:37.705538   80762 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:44:37.705620   80762 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:44:37.705683   80762 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:44:37.705723   80762 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:44:37.705773   80762 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:44:37.705816   80762 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:44:37.705861   80762 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:44:37.705917   80762 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:44:37.705964   80762 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:44:37.706062   80762 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:44:37.706172   80762 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:44:37.706231   80762 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:44:37.706288   80762 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:44:37.707753   80762 out.go:204]   - Booting up control plane ...
	I0612 21:44:37.707857   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:44:37.707931   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:44:37.707994   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:44:37.708064   80762 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:44:37.708197   80762 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:44:37.708251   80762 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:44:37.708344   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.708536   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.708600   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.708770   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.708864   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709067   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709133   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709340   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709441   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709638   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709650   80762 kubeadm.go:309] 
	I0612 21:44:37.709683   80762 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:44:37.709721   80762 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:44:37.709728   80762 kubeadm.go:309] 
	I0612 21:44:37.709777   80762 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:44:37.709817   80762 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:44:37.709910   80762 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:44:37.709917   80762 kubeadm.go:309] 
	I0612 21:44:37.710018   80762 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:44:37.710052   80762 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:44:37.710083   80762 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:44:37.710089   80762 kubeadm.go:309] 
	I0612 21:44:37.710184   80762 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:44:37.710259   80762 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:44:37.710265   80762 kubeadm.go:309] 
	I0612 21:44:37.710359   80762 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:44:37.710431   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:44:37.710497   80762 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:44:37.710563   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:44:37.710607   80762 kubeadm.go:309] 
	W0612 21:44:37.710666   80762 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0612 21:44:37.710709   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:44:38.170461   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:44:38.186842   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:44:38.198380   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:44:38.198400   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:44:38.198454   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:44:38.208876   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:44:38.208948   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:44:38.219641   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:44:38.229622   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:44:38.229685   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:44:38.240153   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:44:38.251342   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:44:38.251401   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:44:38.262662   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:44:38.272898   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:44:38.272954   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
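The grep/rm pairs above implement a simple stale-config check: if a leftover kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint, it is removed before `kubeadm init` is retried. A sketch of that check-then-remove pattern, run locally and without sudo/SSH unlike the real flow:

```go
// For each kubeconfig, grep for the control-plane endpoint and
// delete the file when the pattern (or the file itself) is missing.
package main

import (
	"fmt"
	"os/exec"
)

func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		// grep exits non-zero when the pattern is absent or the file does not exist.
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			_ = exec.Command("rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```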
	I0612 21:44:38.283213   80762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:44:38.501637   80762 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:46:34.582636   80762 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:46:34.582745   80762 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:46:34.584702   80762 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:46:34.584775   80762 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:46:34.584898   80762 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:46:34.585029   80762 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:46:34.585172   80762 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:46:34.585263   80762 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:46:34.587030   80762 out.go:204]   - Generating certificates and keys ...
	I0612 21:46:34.587101   80762 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:46:34.587160   80762 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:46:34.587260   80762 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:46:34.587349   80762 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:46:34.587446   80762 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:46:34.587521   80762 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:46:34.587609   80762 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:46:34.587697   80762 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:46:34.587803   80762 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:46:34.587886   80762 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:46:34.588014   80762 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:46:34.588097   80762 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:46:34.588177   80762 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:46:34.588268   80762 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:46:34.588381   80762 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:46:34.588447   80762 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:46:34.588558   80762 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:46:34.588659   80762 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:46:34.588719   80762 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:46:34.588816   80762 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:46:34.590114   80762 out.go:204]   - Booting up control plane ...
	I0612 21:46:34.590226   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:46:34.590326   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:46:34.590444   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:46:34.590527   80762 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:46:34.590710   80762 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:46:34.590778   80762 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:46:34.590847   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591054   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591149   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591411   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591508   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591743   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591846   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.592108   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.592205   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.592395   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.592403   80762 kubeadm.go:309] 
	I0612 21:46:34.592436   80762 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:46:34.592485   80762 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:46:34.592500   80762 kubeadm.go:309] 
	I0612 21:46:34.592535   80762 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:46:34.592563   80762 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:46:34.592677   80762 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:46:34.592688   80762 kubeadm.go:309] 
	I0612 21:46:34.592820   80762 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:46:34.592855   80762 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:46:34.592883   80762 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:46:34.592890   80762 kubeadm.go:309] 
	I0612 21:46:34.593007   80762 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:46:34.593107   80762 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:46:34.593116   80762 kubeadm.go:309] 
	I0612 21:46:34.593224   80762 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:46:34.593342   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:46:34.593426   80762 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:46:34.593494   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:46:34.593552   80762 kubeadm.go:393] duration metric: took 8m2.356271864s to StartCluster
	I0612 21:46:34.593558   80762 kubeadm.go:309] 
	I0612 21:46:34.593589   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:46:34.593639   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:46:34.643842   80762 cri.go:89] found id: ""
	I0612 21:46:34.643876   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.643887   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:46:34.643905   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:46:34.643982   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:46:34.682878   80762 cri.go:89] found id: ""
	I0612 21:46:34.682899   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.682906   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:46:34.682912   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:46:34.682961   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:46:34.721931   80762 cri.go:89] found id: ""
	I0612 21:46:34.721955   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.721964   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:46:34.721969   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:46:34.722021   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:46:34.759233   80762 cri.go:89] found id: ""
	I0612 21:46:34.759266   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.759274   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:46:34.759280   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:46:34.759333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:46:34.800142   80762 cri.go:89] found id: ""
	I0612 21:46:34.800176   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.800186   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:46:34.800194   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:46:34.800256   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:46:34.836746   80762 cri.go:89] found id: ""
	I0612 21:46:34.836774   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.836784   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:46:34.836791   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:46:34.836850   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:46:34.876108   80762 cri.go:89] found id: ""
	I0612 21:46:34.876138   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.876147   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:46:34.876153   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:46:34.876202   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:46:34.912272   80762 cri.go:89] found id: ""
	I0612 21:46:34.912294   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.912301   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:46:34.912310   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:46:34.912324   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:46:34.997300   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:46:34.997331   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:46:34.997347   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:46:35.105602   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:46:35.105638   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:46:35.152818   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:46:35.152857   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:46:35.216504   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:46:35.216545   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0612 21:46:35.239531   80762 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0612 21:46:35.239581   80762 out.go:239] * 
	W0612 21:46:35.239646   80762 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:46:35.239672   80762 out.go:239] * 
	W0612 21:46:35.240600   80762 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0612 21:46:35.244822   80762 out.go:177] 
	W0612 21:46:35.246072   80762 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:46:35.246137   80762 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0612 21:46:35.246164   80762 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0612 21:46:35.247768   80762 out.go:177] 
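	(Editor's note: a minimal troubleshooting sketch assembled only from the commands the failure output above already suggests; it is not part of the captured log. It assumes shell access to the affected node, for example via `minikube ssh`, and CONTAINERID is a placeholder for an ID taken from the `crictl ps` output.)
	
		# Check whether the kubelet is running and inspect its recent logs
		systemctl status kubelet
		journalctl -xeu kubelet
	
		# List the Kubernetes containers known to CRI-O and inspect a failing one
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
		# If the kubelet cgroup driver is suspected (per the suggestion above),
		# retry with an explicit driver and collect logs for a bug report
		minikube start --extra-config=kubelet.cgroup-driver=systemd
		minikube logs --file=logs.txt
	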
	
	
	==> CRI-O <==
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.006284757Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8fcfd118-edde-4e19-a388-c4fd5244d693 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.007580359Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cb48f817-7de4-4b8f-8106-80d31acfcba6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.007993025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229152007962418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb48f817-7de4-4b8f-8106-80d31acfcba6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.008766259Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c1e722a-9637-4088-9740-bee5b6018ff0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.008848861Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c1e722a-9637-4088-9740-bee5b6018ff0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.009313727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:456a26e2007c446f05111c29fe257ea55ac9aa4f64390753d7b2ad2aec08420d,PodSandboxId:51de2435b4801fd17d8563f20a98cfd2a187bebf18ad47126402320d254108ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228607686254023,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ade8816b-866c-4ba3-9665-fc9b144a4286,},Annotations:map[string]string{io.kubernetes.container.hash: 79c17914,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c1641b1f476cfc4f601ec822ff80a9ee8d47cbd60803d9784e1157a907eced,PodSandboxId:3d0f6c409fe1639f34a5852b3f713811cd2a80aafafc80a7afa602a566572d6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606844016573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hs7zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8af54bf-17f9-48fe-a770-536c2313bc2a,},Annotations:map[string]string{io.kubernetes.container.hash: b78e6ca9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65c73b186f91091b6b9b4656b546bb3ff54b286a42b23fab99f42b63883d8a3,PodSandboxId:5ca8e42ce9f1f7de993bc78c154a76b39b4926d28b57146f76364daae3fba858,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606805588222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpf5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
091154b-ef24-4447-b294-03f8d704f37e,},Annotations:map[string]string{io.kubernetes.container.hash: 695657f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafd4118008a016d83fc26ea50f48bb5d65c039c327915423d0a8cd6174e7b9d,PodSandboxId:b211b1234593f06e6206780c967aaf7ac1475d89f3c90f3eef21ff976773aa83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt
:1718228605813606699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l2wz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7130c7fb-880b-4a7b-937d-3980c89f217a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ae272a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62652ad7fd20de25e0a440d88237903a2caca55e4e6cfb9eef90f37c716f570b,PodSandboxId:93b31e3e61769df84a73c6ea711ac7b2f265e7808c094481714eacd2190790c9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228586377745669,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c528760c1e80f88f75f1e56fecfde584,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7540034e3415b4d9c1685ae0c3b09dc9bfe04a575479cc0eecc567c65c7cce63,PodSandboxId:7d89165e4cb4bca757660b51054d385524f05defa9b920eb6f886fe977078cf9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228586338263388,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83842ac2c4e16e54dde29e303b007929,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:593d39406c63dfef59715265b9658b4b5da66db8584212f23f78bc23f71392a4,PodSandboxId:e701b6df8ab855eaf2e8a20cbf391e93e05fccec112372f23a541d539fe489fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228586338890719,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55dc000dfac3800d39b646c5c11a82c0,},Annotations:map[string]string{io.kubernetes.container.hash: f3eb41bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39dfe322d79671c6df88f6d4c81ccfeb1ea56add7bd86768184df7534f5e86ab,PodSandboxId:5a4a70963c40c75415f5b3dd839d13e4b4ec57d824b48a86d03781919573ccb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228586304584977,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da294bdd0b2d30db40f5d7fa6ca9a0f,},Annotations:map[string]string{io.kubernetes.container.hash: 36ebbbc0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c1e722a-9637-4088-9740-bee5b6018ff0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.048112748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c319a938-69fa-4387-a0f3-9204778c9e56 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.048213208Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c319a938-69fa-4387-a0f3-9204778c9e56 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.049429326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=383c9f36-5f04-4444-9630-1c368eeda18f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.049796057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229152049775848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=383c9f36-5f04-4444-9630-1c368eeda18f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.050412794Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cfbe533e-a7b0-4b26-9040-91ff7aafd579 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.050462446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cfbe533e-a7b0-4b26-9040-91ff7aafd579 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.050647475Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:456a26e2007c446f05111c29fe257ea55ac9aa4f64390753d7b2ad2aec08420d,PodSandboxId:51de2435b4801fd17d8563f20a98cfd2a187bebf18ad47126402320d254108ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228607686254023,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ade8816b-866c-4ba3-9665-fc9b144a4286,},Annotations:map[string]string{io.kubernetes.container.hash: 79c17914,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c1641b1f476cfc4f601ec822ff80a9ee8d47cbd60803d9784e1157a907eced,PodSandboxId:3d0f6c409fe1639f34a5852b3f713811cd2a80aafafc80a7afa602a566572d6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606844016573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hs7zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8af54bf-17f9-48fe-a770-536c2313bc2a,},Annotations:map[string]string{io.kubernetes.container.hash: b78e6ca9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65c73b186f91091b6b9b4656b546bb3ff54b286a42b23fab99f42b63883d8a3,PodSandboxId:5ca8e42ce9f1f7de993bc78c154a76b39b4926d28b57146f76364daae3fba858,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606805588222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpf5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
091154b-ef24-4447-b294-03f8d704f37e,},Annotations:map[string]string{io.kubernetes.container.hash: 695657f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafd4118008a016d83fc26ea50f48bb5d65c039c327915423d0a8cd6174e7b9d,PodSandboxId:b211b1234593f06e6206780c967aaf7ac1475d89f3c90f3eef21ff976773aa83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt
:1718228605813606699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l2wz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7130c7fb-880b-4a7b-937d-3980c89f217a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ae272a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62652ad7fd20de25e0a440d88237903a2caca55e4e6cfb9eef90f37c716f570b,PodSandboxId:93b31e3e61769df84a73c6ea711ac7b2f265e7808c094481714eacd2190790c9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228586377745669,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c528760c1e80f88f75f1e56fecfde584,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7540034e3415b4d9c1685ae0c3b09dc9bfe04a575479cc0eecc567c65c7cce63,PodSandboxId:7d89165e4cb4bca757660b51054d385524f05defa9b920eb6f886fe977078cf9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228586338263388,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83842ac2c4e16e54dde29e303b007929,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:593d39406c63dfef59715265b9658b4b5da66db8584212f23f78bc23f71392a4,PodSandboxId:e701b6df8ab855eaf2e8a20cbf391e93e05fccec112372f23a541d539fe489fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228586338890719,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55dc000dfac3800d39b646c5c11a82c0,},Annotations:map[string]string{io.kubernetes.container.hash: f3eb41bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39dfe322d79671c6df88f6d4c81ccfeb1ea56add7bd86768184df7534f5e86ab,PodSandboxId:5a4a70963c40c75415f5b3dd839d13e4b4ec57d824b48a86d03781919573ccb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228586304584977,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da294bdd0b2d30db40f5d7fa6ca9a0f,},Annotations:map[string]string{io.kubernetes.container.hash: 36ebbbc0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cfbe533e-a7b0-4b26-9040-91ff7aafd579 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.053713661Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=7af8681f-73e7-4b5b-b03a-36fa2055d6de name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.053973089Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a0abb388194270c331d44fc78ddeec2355e8bcd0cd3bd7be7cb410fd4dfedf06,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-r7fbt,Uid:e33a1ff8-3032-4be5-8b6a-3eedfbb92611,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718228607683213976,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-r7fbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e33a1ff8-3032-4be5-8b6a-3eedfbb92611,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T21:43:27.370977220Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:51de2435b4801fd17d8563f20a98cfd2a187bebf18ad47126402320d254108ff,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ade8816b-866c-4ba3-9665-fc9b144a4286,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718228607573179100,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ade8816b-866c-4ba3-9665-fc9b144a4286,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-12T21:43:27.265385350Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3d0f6c409fe1639f34a5852b3f713811cd2a80aafafc80a7afa602a566572d6a,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-hs7zn,Uid:d8af54bf-17f9-48fe-a770-536c2313bc2a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718228605924534982,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-hs7zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8af54bf-17f9-48fe-a770-536c2313bc2a,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T21:43:25.596817664Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5ca8e42ce9f1f7de993bc78c154a76b39b4926d28b57146f76364daae3fba858,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-fpf5q,Uid:1091154b-ef24-4447
-b294-03f8d704f37e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718228605830628231,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpf5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1091154b-ef24-4447-b294-03f8d704f37e,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T21:43:25.511132765Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b211b1234593f06e6206780c967aaf7ac1475d89f3c90f3eef21ff976773aa83,Metadata:&PodSandboxMetadata{Name:kube-proxy-5l2wz,Uid:7130c7fb-880b-4a7b-937d-3980c89f217a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718228605588832665,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5l2wz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7130c7fb-880b-4a7b-937d-3980c89f217a,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-12T21:43:25.263561535Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5a4a70963c40c75415f5b3dd839d13e4b4ec57d824b48a86d03781919573ccb3,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-591460,Uid:7da294bdd0b2d30db40f5d7fa6ca9a0f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718228586107466760,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da294bdd0b2d30db40f5d7fa6ca9a0f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.147:8443,kubernetes.io/config.hash: 7da294bdd0b2d30db40f5d7fa6ca9a0f,kubernetes.io/config.seen: 2024-06-12T21:43:05.654275807Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:93b31e3e61769df84a73c6ea711a
c7b2f265e7808c094481714eacd2190790c9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-591460,Uid:c528760c1e80f88f75f1e56fecfde584,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718228586101265345,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c528760c1e80f88f75f1e56fecfde584,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c528760c1e80f88f75f1e56fecfde584,kubernetes.io/config.seen: 2024-06-12T21:43:05.654277877Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7d89165e4cb4bca757660b51054d385524f05defa9b920eb6f886fe977078cf9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-591460,Uid:83842ac2c4e16e54dde29e303b007929,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718228586100149581,Labels:map[string]string{component: kube-controller-mana
ger,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83842ac2c4e16e54dde29e303b007929,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 83842ac2c4e16e54dde29e303b007929,kubernetes.io/config.seen: 2024-06-12T21:43:05.654277087Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e701b6df8ab855eaf2e8a20cbf391e93e05fccec112372f23a541d539fe489fb,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-591460,Uid:55dc000dfac3800d39b646c5c11a82c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718228586099586111,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55dc000dfac3800d39b646c5c11a82c0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.3
9.147:2379,kubernetes.io/config.hash: 55dc000dfac3800d39b646c5c11a82c0,kubernetes.io/config.seen: 2024-06-12T21:43:05.654271617Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7af8681f-73e7-4b5b-b03a-36fa2055d6de name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.057367923Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a935a463-0edc-425f-9a58-d011de217a25 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.057418565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a935a463-0edc-425f-9a58-d011de217a25 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.057604862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:456a26e2007c446f05111c29fe257ea55ac9aa4f64390753d7b2ad2aec08420d,PodSandboxId:51de2435b4801fd17d8563f20a98cfd2a187bebf18ad47126402320d254108ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228607686254023,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ade8816b-866c-4ba3-9665-fc9b144a4286,},Annotations:map[string]string{io.kubernetes.container.hash: 79c17914,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c1641b1f476cfc4f601ec822ff80a9ee8d47cbd60803d9784e1157a907eced,PodSandboxId:3d0f6c409fe1639f34a5852b3f713811cd2a80aafafc80a7afa602a566572d6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606844016573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hs7zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8af54bf-17f9-48fe-a770-536c2313bc2a,},Annotations:map[string]string{io.kubernetes.container.hash: b78e6ca9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65c73b186f91091b6b9b4656b546bb3ff54b286a42b23fab99f42b63883d8a3,PodSandboxId:5ca8e42ce9f1f7de993bc78c154a76b39b4926d28b57146f76364daae3fba858,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606805588222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpf5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
091154b-ef24-4447-b294-03f8d704f37e,},Annotations:map[string]string{io.kubernetes.container.hash: 695657f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafd4118008a016d83fc26ea50f48bb5d65c039c327915423d0a8cd6174e7b9d,PodSandboxId:b211b1234593f06e6206780c967aaf7ac1475d89f3c90f3eef21ff976773aa83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt
:1718228605813606699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l2wz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7130c7fb-880b-4a7b-937d-3980c89f217a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ae272a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62652ad7fd20de25e0a440d88237903a2caca55e4e6cfb9eef90f37c716f570b,PodSandboxId:93b31e3e61769df84a73c6ea711ac7b2f265e7808c094481714eacd2190790c9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228586377745669,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c528760c1e80f88f75f1e56fecfde584,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7540034e3415b4d9c1685ae0c3b09dc9bfe04a575479cc0eecc567c65c7cce63,PodSandboxId:7d89165e4cb4bca757660b51054d385524f05defa9b920eb6f886fe977078cf9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228586338263388,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83842ac2c4e16e54dde29e303b007929,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:593d39406c63dfef59715265b9658b4b5da66db8584212f23f78bc23f71392a4,PodSandboxId:e701b6df8ab855eaf2e8a20cbf391e93e05fccec112372f23a541d539fe489fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228586338890719,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55dc000dfac3800d39b646c5c11a82c0,},Annotations:map[string]string{io.kubernetes.container.hash: f3eb41bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39dfe322d79671c6df88f6d4c81ccfeb1ea56add7bd86768184df7534f5e86ab,PodSandboxId:5a4a70963c40c75415f5b3dd839d13e4b4ec57d824b48a86d03781919573ccb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228586304584977,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da294bdd0b2d30db40f5d7fa6ca9a0f,},Annotations:map[string]string{io.kubernetes.container.hash: 36ebbbc0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a935a463-0edc-425f-9a58-d011de217a25 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.099347652Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53cc3d17-f427-4513-98c0-af74ac4d5387 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.099439321Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53cc3d17-f427-4513-98c0-af74ac4d5387 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.100741218Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a8e8d15-1704-476e-a191-c5953d8a9cda name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.101380489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229152101355237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a8e8d15-1704-476e-a191-c5953d8a9cda name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.102007577Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2a3e6b2-b972-4ba7-810b-e6da948b6514 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.102108335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2a3e6b2-b972-4ba7-810b-e6da948b6514 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:52:32 embed-certs-591460 crio[726]: time="2024-06-12 21:52:32.102282272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:456a26e2007c446f05111c29fe257ea55ac9aa4f64390753d7b2ad2aec08420d,PodSandboxId:51de2435b4801fd17d8563f20a98cfd2a187bebf18ad47126402320d254108ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228607686254023,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ade8816b-866c-4ba3-9665-fc9b144a4286,},Annotations:map[string]string{io.kubernetes.container.hash: 79c17914,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c1641b1f476cfc4f601ec822ff80a9ee8d47cbd60803d9784e1157a907eced,PodSandboxId:3d0f6c409fe1639f34a5852b3f713811cd2a80aafafc80a7afa602a566572d6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606844016573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hs7zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8af54bf-17f9-48fe-a770-536c2313bc2a,},Annotations:map[string]string{io.kubernetes.container.hash: b78e6ca9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65c73b186f91091b6b9b4656b546bb3ff54b286a42b23fab99f42b63883d8a3,PodSandboxId:5ca8e42ce9f1f7de993bc78c154a76b39b4926d28b57146f76364daae3fba858,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606805588222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpf5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
091154b-ef24-4447-b294-03f8d704f37e,},Annotations:map[string]string{io.kubernetes.container.hash: 695657f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafd4118008a016d83fc26ea50f48bb5d65c039c327915423d0a8cd6174e7b9d,PodSandboxId:b211b1234593f06e6206780c967aaf7ac1475d89f3c90f3eef21ff976773aa83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt
:1718228605813606699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l2wz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7130c7fb-880b-4a7b-937d-3980c89f217a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ae272a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62652ad7fd20de25e0a440d88237903a2caca55e4e6cfb9eef90f37c716f570b,PodSandboxId:93b31e3e61769df84a73c6ea711ac7b2f265e7808c094481714eacd2190790c9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228586377745669,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c528760c1e80f88f75f1e56fecfde584,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7540034e3415b4d9c1685ae0c3b09dc9bfe04a575479cc0eecc567c65c7cce63,PodSandboxId:7d89165e4cb4bca757660b51054d385524f05defa9b920eb6f886fe977078cf9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228586338263388,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83842ac2c4e16e54dde29e303b007929,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:593d39406c63dfef59715265b9658b4b5da66db8584212f23f78bc23f71392a4,PodSandboxId:e701b6df8ab855eaf2e8a20cbf391e93e05fccec112372f23a541d539fe489fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228586338890719,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55dc000dfac3800d39b646c5c11a82c0,},Annotations:map[string]string{io.kubernetes.container.hash: f3eb41bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39dfe322d79671c6df88f6d4c81ccfeb1ea56add7bd86768184df7534f5e86ab,PodSandboxId:5a4a70963c40c75415f5b3dd839d13e4b4ec57d824b48a86d03781919573ccb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228586304584977,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da294bdd0b2d30db40f5d7fa6ca9a0f,},Annotations:map[string]string{io.kubernetes.container.hash: 36ebbbc0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2a3e6b2-b972-4ba7-810b-e6da948b6514 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	456a26e2007c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   51de2435b4801       storage-provisioner
	77c1641b1f476       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3d0f6c409fe16       coredns-7db6d8ff4d-hs7zn
	f65c73b186f91       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   5ca8e42ce9f1f       coredns-7db6d8ff4d-fpf5q
	cafd4118008a0       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   9 minutes ago       Running             kube-proxy                0                   b211b1234593f       kube-proxy-5l2wz
	62652ad7fd20d       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   9 minutes ago       Running             kube-scheduler            2                   93b31e3e61769       kube-scheduler-embed-certs-591460
	593d39406c63d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   e701b6df8ab85       etcd-embed-certs-591460
	7540034e3415b       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   9 minutes ago       Running             kube-controller-manager   2                   7d89165e4cb4b       kube-controller-manager-embed-certs-591460
	39dfe322d7967       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   9 minutes ago       Running             kube-apiserver            2                   5a4a70963c40c       kube-apiserver-embed-certs-591460
	
	
	==> coredns [77c1641b1f476cfc4f601ec822ff80a9ee8d47cbd60803d9784e1157a907eced] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f65c73b186f91091b6b9b4656b546bb3ff54b286a42b23fab99f42b63883d8a3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-591460
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-591460
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=embed-certs-591460
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T21_43_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:43:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-591460
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:52:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:48:38 +0000   Wed, 12 Jun 2024 21:43:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:48:38 +0000   Wed, 12 Jun 2024 21:43:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:48:38 +0000   Wed, 12 Jun 2024 21:43:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:48:38 +0000   Wed, 12 Jun 2024 21:43:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    embed-certs-591460
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 be2a1b8c15954fe4a88099a11e94a7f9
	  System UUID:                be2a1b8c-1595-4fe4-a880-99a11e94a7f9
	  Boot ID:                    1230b539-0b4f-433c-aa97-d3b198afe346
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fpf5q                      100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     9m7s
	  kube-system                 coredns-7db6d8ff4d-hs7zn                      100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     9m7s
	  kube-system                 etcd-embed-certs-591460                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         9m22s
	  kube-system                 kube-apiserver-embed-certs-591460             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-591460    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m22s
	  kube-system                 kube-proxy-5l2wz                              0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m7s
	  kube-system                 kube-scheduler-embed-certs-591460             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m22s
	  kube-system                 metrics-server-569cc877fc-r7fbt               100m (5%!)(MISSING)     0 (0%!)(MISSING)      200Mi (9%!)(MISSING)       0 (0%!)(MISSING)         9m5s
	  kube-system                 storage-provisioner                           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%!)(MISSING)   0 (0%!)(MISSING)
	  memory             440Mi (20%!)(MISSING)  340Mi (16%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node embed-certs-591460 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node embed-certs-591460 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node embed-certs-591460 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node embed-certs-591460 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node embed-certs-591460 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node embed-certs-591460 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node embed-certs-591460 event: Registered Node embed-certs-591460 in Controller
	
	
	==> dmesg <==
	[  +0.052436] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042214] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.641743] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.449793] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.631824] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun12 21:38] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.058959] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059049] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.211065] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.140132] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.318457] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[  +4.609269] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.067804] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.109318] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +4.646862] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.754819] kauditd_printk_skb: 79 callbacks suppressed
	[Jun12 21:43] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.467052] systemd-fstab-generator[3574]: Ignoring "noauto" option for root device
	[  +4.541687] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.009944] systemd-fstab-generator[3893]: Ignoring "noauto" option for root device
	[ +13.880944] systemd-fstab-generator[4092]: Ignoring "noauto" option for root device
	[  +0.107272] kauditd_printk_skb: 14 callbacks suppressed
	[Jun12 21:44] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [593d39406c63dfef59715265b9658b4b5da66db8584212f23f78bc23f71392a4] <==
	{"level":"info","ts":"2024-06-12T21:43:06.723446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d switched to configuration voters=(13949038865233640061)"}
	{"level":"info","ts":"2024-06-12T21:43:06.723559Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"582b8c8375119e1d","local-member-id":"c194f0f1585e7a7d","added-peer-id":"c194f0f1585e7a7d","added-peer-peer-urls":["https://192.168.39.147:2380"]}
	{"level":"info","ts":"2024-06-12T21:43:06.753544Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-12T21:43:06.757462Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.147:2380"}
	{"level":"info","ts":"2024-06-12T21:43:06.757585Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.147:2380"}
	{"level":"info","ts":"2024-06-12T21:43:06.757668Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c194f0f1585e7a7d","initial-advertise-peer-urls":["https://192.168.39.147:2380"],"listen-peer-urls":["https://192.168.39.147:2380"],"advertise-client-urls":["https://192.168.39.147:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.147:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-12T21:43:06.757944Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-12T21:43:07.656928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-12T21:43:07.657026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-12T21:43:07.657134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d received MsgPreVoteResp from c194f0f1585e7a7d at term 1"}
	{"level":"info","ts":"2024-06-12T21:43:07.657165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d became candidate at term 2"}
	{"level":"info","ts":"2024-06-12T21:43:07.657188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d received MsgVoteResp from c194f0f1585e7a7d at term 2"}
	{"level":"info","ts":"2024-06-12T21:43:07.657215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d became leader at term 2"}
	{"level":"info","ts":"2024-06-12T21:43:07.65724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c194f0f1585e7a7d elected leader c194f0f1585e7a7d at term 2"}
	{"level":"info","ts":"2024-06-12T21:43:07.661292Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:43:07.662185Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c194f0f1585e7a7d","local-member-attributes":"{Name:embed-certs-591460 ClientURLs:[https://192.168.39.147:2379]}","request-path":"/0/members/c194f0f1585e7a7d/attributes","cluster-id":"582b8c8375119e1d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-12T21:43:07.66231Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:43:07.662363Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:43:07.668593Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-12T21:43:07.670347Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.147:2379"}
	{"level":"info","ts":"2024-06-12T21:43:07.675076Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-12T21:43:07.702095Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-12T21:43:07.687453Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"582b8c8375119e1d","local-member-id":"c194f0f1585e7a7d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:43:07.702227Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:43:07.702278Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 21:52:32 up 14 min,  0 users,  load average: 0.13, 0.12, 0.09
	Linux embed-certs-591460 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [39dfe322d79671c6df88f6d4c81ccfeb1ea56add7bd86768184df7534f5e86ab] <==
	I0612 21:46:28.182498       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:48:09.112421       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:48:09.112539       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0612 21:48:10.113465       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:48:10.113529       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:48:10.113539       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:48:10.113582       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:48:10.113628       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:48:10.114802       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:49:10.114574       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:49:10.114748       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:49:10.114785       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:49:10.115965       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:49:10.116143       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:49:10.116178       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:51:10.116125       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:51:10.116264       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:51:10.116272       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:51:10.116405       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:51:10.116541       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:51:10.118400       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7540034e3415b4d9c1685ae0c3b09dc9bfe04a575479cc0eecc567c65c7cce63] <==
	I0612 21:46:54.960869       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:47:24.533851       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:47:24.974830       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:47:54.539488       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:47:54.984261       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:48:24.545654       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:48:24.995976       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:48:54.550857       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:48:55.007438       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0612 21:49:16.010431       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="209.41µs"
	E0612 21:49:24.556565       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:49:25.023715       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0612 21:49:28.012284       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="79.513µs"
	E0612 21:49:54.561962       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:49:55.032273       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:50:24.568846       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:50:25.039877       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:50:54.572886       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:50:55.050413       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:51:24.578286       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:51:25.066694       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:51:54.583733       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:51:55.074898       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:52:24.592259       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:52:25.083618       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [cafd4118008a016d83fc26ea50f48bb5d65c039c327915423d0a8cd6174e7b9d] <==
	I0612 21:43:26.187865       1 server_linux.go:69] "Using iptables proxy"
	I0612 21:43:26.220739       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.147"]
	I0612 21:43:26.297603       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 21:43:26.297650       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 21:43:26.297671       1 server_linux.go:165] "Using iptables Proxier"
	I0612 21:43:26.302762       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 21:43:26.302932       1 server.go:872] "Version info" version="v1.30.1"
	I0612 21:43:26.302963       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:43:26.306612       1 config.go:192] "Starting service config controller"
	I0612 21:43:26.306628       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 21:43:26.306647       1 config.go:101] "Starting endpoint slice config controller"
	I0612 21:43:26.306651       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 21:43:26.306966       1 config.go:319] "Starting node config controller"
	I0612 21:43:26.306972       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 21:43:26.407258       1 shared_informer.go:320] Caches are synced for node config
	I0612 21:43:26.407287       1 shared_informer.go:320] Caches are synced for service config
	I0612 21:43:26.407340       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [62652ad7fd20de25e0a440d88237903a2caca55e4e6cfb9eef90f37c716f570b] <==
	W0612 21:43:09.168134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0612 21:43:09.171223       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0612 21:43:09.168189       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0612 21:43:09.171361       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0612 21:43:09.171535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0612 21:43:09.171655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0612 21:43:10.003655       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0612 21:43:10.003771       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0612 21:43:10.016281       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 21:43:10.016408       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 21:43:10.061076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0612 21:43:10.061163       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0612 21:43:10.073551       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0612 21:43:10.073691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0612 21:43:10.176353       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0612 21:43:10.176642       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0612 21:43:10.226734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0612 21:43:10.227184       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0612 21:43:10.257174       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0612 21:43:10.257541       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0612 21:43:10.367564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0612 21:43:10.367898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0612 21:43:10.401530       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0612 21:43:10.401665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 21:43:11.842096       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 21:50:12 embed-certs-591460 kubelet[3900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:50:12 embed-certs-591460 kubelet[3900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:50:12 embed-certs-591460 kubelet[3900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:50:12 embed-certs-591460 kubelet[3900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:50:24 embed-certs-591460 kubelet[3900]: E0612 21:50:24.990448    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:50:38 embed-certs-591460 kubelet[3900]: E0612 21:50:38.990946    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:50:51 embed-certs-591460 kubelet[3900]: E0612 21:50:51.992296    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:51:02 embed-certs-591460 kubelet[3900]: E0612 21:51:02.990792    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:51:12 embed-certs-591460 kubelet[3900]: E0612 21:51:12.006149    3900 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:51:12 embed-certs-591460 kubelet[3900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:51:12 embed-certs-591460 kubelet[3900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:51:12 embed-certs-591460 kubelet[3900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:51:12 embed-certs-591460 kubelet[3900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:51:13 embed-certs-591460 kubelet[3900]: E0612 21:51:13.992527    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:51:24 embed-certs-591460 kubelet[3900]: E0612 21:51:24.991892    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:51:38 embed-certs-591460 kubelet[3900]: E0612 21:51:38.991445    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:51:53 embed-certs-591460 kubelet[3900]: E0612 21:51:53.991893    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:52:04 embed-certs-591460 kubelet[3900]: E0612 21:52:04.989948    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:52:12 embed-certs-591460 kubelet[3900]: E0612 21:52:12.007011    3900 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:52:12 embed-certs-591460 kubelet[3900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:52:12 embed-certs-591460 kubelet[3900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:52:12 embed-certs-591460 kubelet[3900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:52:12 embed-certs-591460 kubelet[3900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:52:16 embed-certs-591460 kubelet[3900]: E0612 21:52:16.990784    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:52:30 embed-certs-591460 kubelet[3900]: E0612 21:52:30.991633    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	
	
	==> storage-provisioner [456a26e2007c446f05111c29fe257ea55ac9aa4f64390753d7b2ad2aec08420d] <==
	I0612 21:43:27.793337       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0612 21:43:27.807553       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0612 21:43:27.807656       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0612 21:43:27.819125       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0612 21:43:27.819624       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-591460_838604cf-6703-4879-a7a7-57d5015a543a!
	I0612 21:43:27.824439       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"142195aa-ac84-4e90-b8a3-6644b794cbbe", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-591460_838604cf-6703-4879-a7a7-57d5015a543a became leader
	I0612 21:43:27.921158       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-591460_838604cf-6703-4879-a7a7-57d5015a543a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-591460 -n embed-certs-591460
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-591460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-r7fbt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-591460 describe pod metrics-server-569cc877fc-r7fbt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-591460 describe pod metrics-server-569cc877fc-r7fbt: exit status 1 (62.197095ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-r7fbt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-591460 describe pod metrics-server-569cc877fc-r7fbt: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0612 21:44:56.704781   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 21:45:04.134242   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
E0612 21:45:34.343281   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:46:26.517771   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
E0612 21:46:27.181600   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-087875 -n no-preload-087875
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-06-12 21:53:22.947238611 +0000 UTC m=+6154.561688992
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-087875 -n no-preload-087875
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-087875 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-087875 logs -n 25: (2.159287924s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| delete  | -p bridge-701638                                       | bridge-701638                | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-576552 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | disable-driver-mounts-576552                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-087875             | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-376087  | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-591460            | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-983302        | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-087875                  | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-376087       | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:42 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-591460                 | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-983302             | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 21:33:52
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 21:33:52.855557   80762 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:33:52.855829   80762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:33:52.855839   80762 out.go:304] Setting ErrFile to fd 2...
	I0612 21:33:52.855845   80762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:33:52.856037   80762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:33:52.856582   80762 out.go:298] Setting JSON to false
	I0612 21:33:52.857472   80762 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8178,"bootTime":1718219855,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:33:52.857527   80762 start.go:139] virtualization: kvm guest
	I0612 21:33:52.859369   80762 out.go:177] * [old-k8s-version-983302] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:33:52.860886   80762 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:33:52.860907   80762 notify.go:220] Checking for updates...
	I0612 21:33:52.862185   80762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:33:52.863642   80762 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:33:52.865031   80762 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:33:52.866306   80762 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:33:52.867535   80762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:33:52.869148   80762 config.go:182] Loaded profile config "old-k8s-version-983302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:33:52.869530   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:33:52.869597   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:33:52.884278   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41163
	I0612 21:33:52.884743   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:33:52.885211   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:33:52.885234   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:33:52.885575   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:33:52.885768   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:33:52.887577   80762 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0612 21:33:52.888972   80762 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:33:52.889265   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:33:52.889296   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:33:52.903649   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44493
	I0612 21:33:52.904087   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:33:52.904500   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:33:52.904518   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:33:52.904831   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:33:52.904988   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:33:52.939030   80762 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 21:33:52.940484   80762 start.go:297] selected driver: kvm2
	I0612 21:33:52.940497   80762 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:33:52.940622   80762 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:33:52.941314   80762 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:33:52.941389   80762 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:33:52.956273   80762 install.go:137] /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:33:52.956646   80762 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:33:52.956674   80762 cni.go:84] Creating CNI manager for ""
	I0612 21:33:52.956682   80762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:33:52.956715   80762 start.go:340] cluster config:
	{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:33:52.956828   80762 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:33:52.958634   80762 out.go:177] * Starting "old-k8s-version-983302" primary control-plane node in "old-k8s-version-983302" cluster
	I0612 21:33:52.959924   80762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:33:52.959963   80762 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0612 21:33:52.959970   80762 cache.go:56] Caching tarball of preloaded images
	I0612 21:33:52.960065   80762 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:33:52.960079   80762 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0612 21:33:52.960190   80762 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:33:52.960397   80762 start.go:360] acquireMachinesLock for old-k8s-version-983302: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:33:57.423439   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:00.495475   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:06.575478   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:09.647560   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:15.727510   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:18.799491   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:24.879423   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:27.951495   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:34.031457   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:37.103569   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:43.183470   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:46.255491   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:52.335452   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:55.407544   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:01.487489   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:04.559546   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:10.639492   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:13.711372   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:19.791460   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:22.863455   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:28.943506   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:32.015443   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:38.095436   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:41.167526   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:47.247485   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:50.319435   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:56.399471   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:59.471485   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:05.551493   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:08.623467   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:14.703401   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:17.775479   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:23.855516   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:26.927418   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:33.007439   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:36.079449   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:42.159480   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:45.231482   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:51.311424   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:54.383524   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:00.463466   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:03.535465   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:09.615457   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:12.687462   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:18.767463   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:21.839431   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:24.843967   80243 start.go:364] duration metric: took 4m34.377488728s to acquireMachinesLock for "default-k8s-diff-port-376087"
	I0612 21:37:24.844034   80243 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:37:24.844046   80243 fix.go:54] fixHost starting: 
	I0612 21:37:24.844649   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:37:24.844689   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:37:24.859743   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I0612 21:37:24.860227   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:37:24.860659   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:37:24.860680   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:37:24.861055   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:37:24.861352   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:24.861550   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:37:24.863507   80243 fix.go:112] recreateIfNeeded on default-k8s-diff-port-376087: state=Stopped err=<nil>
	I0612 21:37:24.863538   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	W0612 21:37:24.863708   80243 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:37:24.865564   80243 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-376087" ...
	I0612 21:37:24.866899   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Start
	I0612 21:37:24.867064   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring networks are active...
	I0612 21:37:24.867951   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring network default is active
	I0612 21:37:24.868390   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring network mk-default-k8s-diff-port-376087 is active
	I0612 21:37:24.868746   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Getting domain xml...
	I0612 21:37:24.869408   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Creating domain...
	I0612 21:37:24.841481   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:37:24.841529   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:37:24.841912   80157 buildroot.go:166] provisioning hostname "no-preload-087875"
	I0612 21:37:24.841938   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:37:24.842149   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:37:24.843818   80157 machine.go:97] duration metric: took 4m37.413209096s to provisionDockerMachine
	I0612 21:37:24.843853   80157 fix.go:56] duration metric: took 4m37.434262933s for fixHost
	I0612 21:37:24.843860   80157 start.go:83] releasing machines lock for "no-preload-087875", held for 4m37.434303466s
	W0612 21:37:24.843897   80157 start.go:713] error starting host: provision: host is not running
	W0612 21:37:24.843971   80157 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0612 21:37:24.843980   80157 start.go:728] Will try again in 5 seconds ...
	I0612 21:37:26.077364   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting to get IP...
	I0612 21:37:26.078173   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.078646   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.078686   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.078611   81491 retry.go:31] will retry after 224.429366ms: waiting for machine to come up
	I0612 21:37:26.305227   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.305668   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.305699   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.305627   81491 retry.go:31] will retry after 298.325251ms: waiting for machine to come up
	I0612 21:37:26.605155   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.605587   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.605622   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.605558   81491 retry.go:31] will retry after 327.789765ms: waiting for machine to come up
	I0612 21:37:26.935066   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.935536   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.935567   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.935477   81491 retry.go:31] will retry after 381.56012ms: waiting for machine to come up
	I0612 21:37:27.319036   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.319485   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.319516   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:27.319429   81491 retry.go:31] will retry after 474.663822ms: waiting for machine to come up
	I0612 21:37:27.796149   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.796596   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.796635   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:27.796564   81491 retry.go:31] will retry after 943.868595ms: waiting for machine to come up
	I0612 21:37:28.741715   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:28.742226   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:28.742259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:28.742180   81491 retry.go:31] will retry after 1.014472282s: waiting for machine to come up
	I0612 21:37:29.758384   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:29.758928   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:29.758947   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:29.758867   81491 retry.go:31] will retry after 971.872729ms: waiting for machine to come up
	I0612 21:37:29.845647   80157 start.go:360] acquireMachinesLock for no-preload-087875: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:37:30.732362   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:30.732794   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:30.732827   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:30.732742   81491 retry.go:31] will retry after 1.352202491s: waiting for machine to come up
	I0612 21:37:32.087272   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:32.087702   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:32.087726   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:32.087663   81491 retry.go:31] will retry after 2.276552983s: waiting for machine to come up
	I0612 21:37:34.367159   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:34.367579   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:34.367613   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:34.367520   81491 retry.go:31] will retry after 1.785262755s: waiting for machine to come up
	I0612 21:37:36.154927   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:36.155388   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:36.155412   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:36.155357   81491 retry.go:31] will retry after 3.309693081s: waiting for machine to come up
	I0612 21:37:39.468800   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:39.469443   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:39.469469   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:39.469393   81491 retry.go:31] will retry after 4.284995408s: waiting for machine to come up
	I0612 21:37:45.096430   80404 start.go:364] duration metric: took 4m40.295909999s to acquireMachinesLock for "embed-certs-591460"
	I0612 21:37:45.096485   80404 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:37:45.096490   80404 fix.go:54] fixHost starting: 
	I0612 21:37:45.096932   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:37:45.096972   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:37:45.113819   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39005
	I0612 21:37:45.114290   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:37:45.114823   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:37:45.114843   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:37:45.115208   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:37:45.115415   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:37:45.115578   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:37:45.117131   80404 fix.go:112] recreateIfNeeded on embed-certs-591460: state=Stopped err=<nil>
	I0612 21:37:45.117156   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	W0612 21:37:45.117324   80404 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:37:45.119535   80404 out.go:177] * Restarting existing kvm2 VM for "embed-certs-591460" ...
	I0612 21:37:43.759195   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.759548   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Found IP for machine: 192.168.61.80
	I0612 21:37:43.759575   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has current primary IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.759583   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Reserving static IP address...
	I0612 21:37:43.760031   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Reserved static IP address: 192.168.61.80
	I0612 21:37:43.760063   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-376087", mac: "52:54:00:01:75:58", ip: "192.168.61.80"} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.760075   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for SSH to be available...
	I0612 21:37:43.760120   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | skip adding static IP to network mk-default-k8s-diff-port-376087 - found existing host DHCP lease matching {name: "default-k8s-diff-port-376087", mac: "52:54:00:01:75:58", ip: "192.168.61.80"}
	I0612 21:37:43.760134   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Getting to WaitForSSH function...
	I0612 21:37:43.762259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.762597   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.762626   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.762741   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Using SSH client type: external
	I0612 21:37:43.762771   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa (-rw-------)
	I0612 21:37:43.762804   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:37:43.762842   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | About to run SSH command:
	I0612 21:37:43.762860   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | exit 0
	I0612 21:37:43.891446   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | SSH cmd err, output: <nil>: 
	I0612 21:37:43.891831   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetConfigRaw
	I0612 21:37:43.892485   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:43.895220   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.895625   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.895656   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.895928   80243 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/config.json ...
	I0612 21:37:43.896140   80243 machine.go:94] provisionDockerMachine start ...
	I0612 21:37:43.896161   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:43.896388   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:43.898898   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.899317   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.899346   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.899539   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:43.899727   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:43.899868   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:43.900019   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:43.900171   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:43.900360   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:43.900371   80243 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:37:44.016295   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:37:44.016327   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.016577   80243 buildroot.go:166] provisioning hostname "default-k8s-diff-port-376087"
	I0612 21:37:44.016602   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.016804   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.019396   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.019732   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.019763   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.019881   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.020084   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.020214   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.020418   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.020612   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.020803   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.020820   80243 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-376087 && echo "default-k8s-diff-port-376087" | sudo tee /etc/hostname
	I0612 21:37:44.146019   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376087
	
	I0612 21:37:44.146049   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.148758   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.149204   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.149238   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.149356   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.149538   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.149731   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.149873   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.150013   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.150187   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.150204   80243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-376087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-376087/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-376087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:37:44.272821   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:37:44.272852   80243 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:37:44.272887   80243 buildroot.go:174] setting up certificates
	I0612 21:37:44.272895   80243 provision.go:84] configureAuth start
	I0612 21:37:44.272903   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.273185   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:44.275991   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.276337   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.276366   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.276591   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.279011   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.279370   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.279396   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.279521   80243 provision.go:143] copyHostCerts
	I0612 21:37:44.279576   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:37:44.279585   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:37:44.279649   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:37:44.279740   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:37:44.279748   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:37:44.279770   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:37:44.279828   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:37:44.279835   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:37:44.279855   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:37:44.279914   80243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-376087 san=[127.0.0.1 192.168.61.80 default-k8s-diff-port-376087 localhost minikube]
	I0612 21:37:44.410909   80243 provision.go:177] copyRemoteCerts
	I0612 21:37:44.410974   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:37:44.410999   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.413740   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.414140   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.414173   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.414406   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.414597   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.414759   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.414904   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:44.501641   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:37:44.526082   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0612 21:37:44.549455   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:37:44.572447   80243 provision.go:87] duration metric: took 299.539656ms to configureAuth
	I0612 21:37:44.572473   80243 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:37:44.572632   80243 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:37:44.572731   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.575518   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.575913   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.575948   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.576170   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.576383   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.576553   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.576754   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.576913   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.577134   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.577155   80243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:37:44.851891   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:37:44.851922   80243 machine.go:97] duration metric: took 955.766062ms to provisionDockerMachine
	I0612 21:37:44.851936   80243 start.go:293] postStartSetup for "default-k8s-diff-port-376087" (driver="kvm2")
	I0612 21:37:44.851951   80243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:37:44.851970   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:44.852318   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:37:44.852352   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.855231   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.855556   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.855595   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.855727   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.855935   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.856127   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.856260   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:44.941821   80243 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:37:44.946013   80243 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:37:44.946052   80243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:37:44.946120   80243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:37:44.946200   80243 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:37:44.946281   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:37:44.955467   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:37:44.979379   80243 start.go:296] duration metric: took 127.428385ms for postStartSetup
	I0612 21:37:44.979421   80243 fix.go:56] duration metric: took 20.135375416s for fixHost
	I0612 21:37:44.979445   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.981891   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.982259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.982287   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.982520   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.982713   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.982920   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.983040   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.983220   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.983450   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.983467   80243 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:37:45.096266   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228265.072559389
	
	I0612 21:37:45.096288   80243 fix.go:216] guest clock: 1718228265.072559389
	I0612 21:37:45.096295   80243 fix.go:229] Guest: 2024-06-12 21:37:45.072559389 +0000 UTC Remote: 2024-06-12 21:37:44.979426071 +0000 UTC m=+294.653210040 (delta=93.133318ms)
	I0612 21:37:45.096313   80243 fix.go:200] guest clock delta is within tolerance: 93.133318ms
	I0612 21:37:45.096318   80243 start.go:83] releasing machines lock for "default-k8s-diff-port-376087", held for 20.252307995s
	I0612 21:37:45.096346   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.096683   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:45.099332   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.099761   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.099805   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.099902   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100560   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100767   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100841   80243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:37:45.100880   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:45.100981   80243 ssh_runner.go:195] Run: cat /version.json
	I0612 21:37:45.101007   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:45.103590   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.103774   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104052   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.104084   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104186   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.104202   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:45.104210   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104417   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:45.104430   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:45.104650   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:45.104651   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:45.104837   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:45.104852   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:45.104993   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:45.208199   80243 ssh_runner.go:195] Run: systemctl --version
	I0612 21:37:45.214375   80243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:37:45.370991   80243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:37:45.378676   80243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:37:45.378744   80243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:37:45.400622   80243 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:37:45.400642   80243 start.go:494] detecting cgroup driver to use...
	I0612 21:37:45.400709   80243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:37:45.416775   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:37:45.430261   80243 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:37:45.430314   80243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:37:45.445482   80243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:37:45.461471   80243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:37:45.578411   80243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:37:45.750493   80243 docker.go:233] disabling docker service ...
	I0612 21:37:45.750556   80243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:37:45.769072   80243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:37:45.784755   80243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:37:45.907970   80243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:37:46.031847   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:37:46.046473   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:37:46.067764   80243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:37:46.067813   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.080604   80243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:37:46.080660   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.093611   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.104443   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.117070   80243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:37:46.128759   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.139977   80243 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.157893   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.168896   80243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:37:46.179765   80243 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:37:46.179816   80243 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:37:46.194059   80243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:37:46.205474   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:37:46.322562   80243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:37:46.479073   80243 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:37:46.479149   80243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:37:46.484557   80243 start.go:562] Will wait 60s for crictl version
	I0612 21:37:46.484609   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:37:46.488403   80243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:37:46.529210   80243 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:37:46.529301   80243 ssh_runner.go:195] Run: crio --version
	I0612 21:37:46.561476   80243 ssh_runner.go:195] Run: crio --version
	I0612 21:37:46.594477   80243 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:37:45.120900   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Start
	I0612 21:37:45.121084   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring networks are active...
	I0612 21:37:45.121776   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring network default is active
	I0612 21:37:45.122108   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring network mk-embed-certs-591460 is active
	I0612 21:37:45.122554   80404 main.go:141] libmachine: (embed-certs-591460) Getting domain xml...
	I0612 21:37:45.123260   80404 main.go:141] libmachine: (embed-certs-591460) Creating domain...
	I0612 21:37:46.357867   80404 main.go:141] libmachine: (embed-certs-591460) Waiting to get IP...
	I0612 21:37:46.358704   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.359164   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.359265   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.359144   81627 retry.go:31] will retry after 278.948395ms: waiting for machine to come up
	I0612 21:37:46.639971   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.640491   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.640523   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.640433   81627 retry.go:31] will retry after 342.550517ms: waiting for machine to come up
	I0612 21:37:46.985065   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.985590   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.985618   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.985548   81627 retry.go:31] will retry after 297.683214ms: waiting for machine to come up
	I0612 21:37:47.285192   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:47.285650   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:47.285688   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:47.285615   81627 retry.go:31] will retry after 415.994572ms: waiting for machine to come up
	I0612 21:37:47.702894   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:47.703398   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:47.703424   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:47.703353   81627 retry.go:31] will retry after 672.441633ms: waiting for machine to come up
	I0612 21:37:48.377227   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:48.377772   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:48.377802   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:48.377735   81627 retry.go:31] will retry after 790.165478ms: waiting for machine to come up
	I0612 21:37:49.169651   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:49.170194   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:49.170224   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:49.170134   81627 retry.go:31] will retry after 953.609739ms: waiting for machine to come up
	I0612 21:37:46.595772   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:46.599221   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:46.599682   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:46.599712   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:46.599919   80243 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0612 21:37:46.604573   80243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:37:46.617274   80243 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-376087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:37:46.617388   80243 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:37:46.617443   80243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:37:46.663227   80243 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:37:46.663306   80243 ssh_runner.go:195] Run: which lz4
	I0612 21:37:46.667878   80243 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0612 21:37:46.672384   80243 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:37:46.672416   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 21:37:48.195844   80243 crio.go:462] duration metric: took 1.527996646s to copy over tarball
	I0612 21:37:48.195908   80243 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:37:50.125800   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:50.126305   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:50.126337   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:50.126260   81627 retry.go:31] will retry after 938.251336ms: waiting for machine to come up
	I0612 21:37:51.065851   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:51.066225   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:51.066247   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:51.066194   81627 retry.go:31] will retry after 1.635454683s: waiting for machine to come up
	I0612 21:37:52.704193   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:52.704663   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:52.704687   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:52.704633   81627 retry.go:31] will retry after 1.56455027s: waiting for machine to come up
	I0612 21:37:54.271391   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:54.271873   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:54.271919   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:54.271826   81627 retry.go:31] will retry after 2.052574222s: waiting for machine to come up
	I0612 21:37:50.464553   80243 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.268615304s)
	I0612 21:37:50.464601   80243 crio.go:469] duration metric: took 2.268715227s to extract the tarball
	I0612 21:37:50.464612   80243 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:37:50.502406   80243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:37:50.550796   80243 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:37:50.550821   80243 cache_images.go:84] Images are preloaded, skipping loading
	I0612 21:37:50.550831   80243 kubeadm.go:928] updating node { 192.168.61.80 8444 v1.30.1 crio true true} ...
	I0612 21:37:50.550957   80243 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-376087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:37:50.551042   80243 ssh_runner.go:195] Run: crio config
	I0612 21:37:50.603232   80243 cni.go:84] Creating CNI manager for ""
	I0612 21:37:50.603256   80243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:37:50.603268   80243 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:37:50.603299   80243 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.80 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-376087 NodeName:default-k8s-diff-port-376087 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:37:50.603459   80243 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.80
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-376087"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:37:50.603524   80243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:37:50.614003   80243 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:37:50.614082   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:37:50.623416   80243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0612 21:37:50.640203   80243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:37:50.656668   80243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0612 21:37:50.674601   80243 ssh_runner.go:195] Run: grep 192.168.61.80	control-plane.minikube.internal$ /etc/hosts
	I0612 21:37:50.678858   80243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:37:50.692389   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:37:50.822225   80243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:37:50.840703   80243 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087 for IP: 192.168.61.80
	I0612 21:37:50.840734   80243 certs.go:194] generating shared ca certs ...
	I0612 21:37:50.840758   80243 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:37:50.840936   80243 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:37:50.840986   80243 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:37:50.840999   80243 certs.go:256] generating profile certs ...
	I0612 21:37:50.841133   80243 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/client.key
	I0612 21:37:50.841200   80243 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.key.0afce446
	I0612 21:37:50.841238   80243 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.key
	I0612 21:37:50.841357   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:37:50.841398   80243 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:37:50.841409   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:37:50.841438   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:37:50.841469   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:37:50.841489   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:37:50.841529   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:37:50.842311   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:37:50.880075   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:37:50.914504   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:37:50.945724   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:37:50.975702   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0612 21:37:51.009817   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:37:51.039086   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:37:51.064146   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:37:51.088483   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:37:51.112785   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:37:51.136192   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:37:51.159239   80243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:37:51.175719   80243 ssh_runner.go:195] Run: openssl version
	I0612 21:37:51.181707   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:37:51.193498   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.198415   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.198475   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.204601   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:37:51.216354   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:37:51.231979   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.236952   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.237018   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.243461   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:37:51.258481   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:37:51.273412   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.279356   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.279420   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.285551   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:37:51.298066   80243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:37:51.302791   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:37:51.309402   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:37:51.316170   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:37:51.322785   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:37:51.329066   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:37:51.335031   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 21:37:51.340945   80243 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-376087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:37:51.341082   80243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:37:51.341143   80243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:37:51.383011   80243 cri.go:89] found id: ""
	I0612 21:37:51.383134   80243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:37:51.394768   80243 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:37:51.394794   80243 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:37:51.394800   80243 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:37:51.394852   80243 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:37:51.408147   80243 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:37:51.409094   80243 kubeconfig.go:125] found "default-k8s-diff-port-376087" server: "https://192.168.61.80:8444"
	I0612 21:37:51.411221   80243 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:37:51.421897   80243 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.80
	I0612 21:37:51.421934   80243 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:37:51.421949   80243 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:37:51.422029   80243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:37:51.470321   80243 cri.go:89] found id: ""
	I0612 21:37:51.470441   80243 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:37:51.488369   80243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:37:51.498367   80243 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:37:51.498388   80243 kubeadm.go:156] found existing configuration files:
	
	I0612 21:37:51.498449   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0612 21:37:51.510212   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:37:51.510287   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:37:51.520231   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0612 21:37:51.529270   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:37:51.529339   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:37:51.538902   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0612 21:37:51.548593   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:37:51.548652   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:37:51.558533   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0612 21:37:51.567995   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:37:51.568063   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:37:51.577695   80243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:37:51.587794   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:51.718155   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.602448   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.820456   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.901167   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.977502   80243 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:37:52.977606   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.477802   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.977879   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.995753   80243 api_server.go:72] duration metric: took 1.018251882s to wait for apiserver process to appear ...
	I0612 21:37:53.995788   80243 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:37:53.995812   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:53.996308   80243 api_server.go:269] stopped: https://192.168.61.80:8444/healthz: Get "https://192.168.61.80:8444/healthz": dial tcp 192.168.61.80:8444: connect: connection refused
	I0612 21:37:54.496045   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.293362   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:37:57.293394   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:37:57.293408   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.395854   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:57.395886   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:57.496122   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.505090   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:57.505124   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:57.996334   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:58.000606   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:58.000646   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:58.496177   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:58.504422   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 200:
	ok
	I0612 21:37:58.513123   80243 api_server.go:141] control plane version: v1.30.1
	I0612 21:37:58.513150   80243 api_server.go:131] duration metric: took 4.517354722s to wait for apiserver health ...
	I0612 21:37:58.513158   80243 cni.go:84] Creating CNI manager for ""
	I0612 21:37:58.513163   80243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:37:58.514696   80243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:37:56.325937   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:56.326316   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:56.326343   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:56.326261   81627 retry.go:31] will retry after 3.51636746s: waiting for machine to come up
	I0612 21:37:58.516091   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:37:58.541034   80243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:37:58.585635   80243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:37:58.596829   80243 system_pods.go:59] 8 kube-system pods found
	I0612 21:37:58.596859   80243 system_pods.go:61] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:37:58.596867   80243 system_pods.go:61] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:37:58.596873   80243 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:37:58.596883   80243 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:37:58.596888   80243 system_pods.go:61] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0612 21:37:58.596893   80243 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:37:58.596899   80243 system_pods.go:61] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:37:58.596904   80243 system_pods.go:61] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:37:58.596910   80243 system_pods.go:74] duration metric: took 11.248328ms to wait for pod list to return data ...
	I0612 21:37:58.596917   80243 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:37:58.600081   80243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:37:58.600107   80243 node_conditions.go:123] node cpu capacity is 2
	I0612 21:37:58.600119   80243 node_conditions.go:105] duration metric: took 3.197181ms to run NodePressure ...
	I0612 21:37:58.600134   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:58.911963   80243 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:37:58.918455   80243 kubeadm.go:733] kubelet initialised
	I0612 21:37:58.918475   80243 kubeadm.go:734] duration metric: took 6.490654ms waiting for restarted kubelet to initialise ...
	I0612 21:37:58.918482   80243 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:37:58.924427   80243 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.930290   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.930329   80243 pod_ready.go:81] duration metric: took 5.86525ms for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.930339   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.930346   80243 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.935394   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.935416   80243 pod_ready.go:81] duration metric: took 5.061639ms for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.935426   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.935431   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.940238   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.940268   80243 pod_ready.go:81] duration metric: took 4.829842ms for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.940286   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.940295   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.989649   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.989686   80243 pod_ready.go:81] duration metric: took 49.380431ms for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.989702   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.989711   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:59.389868   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-proxy-8lrgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.389903   80243 pod_ready.go:81] duration metric: took 400.174877ms for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:59.389912   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-proxy-8lrgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.389918   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:59.790398   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.790425   80243 pod_ready.go:81] duration metric: took 400.499157ms for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:59.790435   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.790449   80243 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:00.189506   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:00.189533   80243 pod_ready.go:81] duration metric: took 399.075983ms for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:00.189551   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:00.189559   80243 pod_ready.go:38] duration metric: took 1.271068537s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:38:00.189574   80243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:38:00.201480   80243 ops.go:34] apiserver oom_adj: -16
	I0612 21:38:00.201504   80243 kubeadm.go:591] duration metric: took 8.806697524s to restartPrimaryControlPlane
	I0612 21:38:00.201514   80243 kubeadm.go:393] duration metric: took 8.860579681s to StartCluster
	I0612 21:38:00.201536   80243 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:00.201601   80243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:38:00.203106   80243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:00.203416   80243 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:38:00.205568   80243 out.go:177] * Verifying Kubernetes components...
	I0612 21:38:00.203448   80243 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:38:00.203614   80243 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:00.207110   80243 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207120   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:00.207120   80243 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207143   80243 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207166   80243 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.207193   80243 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:38:00.207187   80243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-376087"
	I0612 21:38:00.207208   80243 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.207222   80243 addons.go:243] addon metrics-server should already be in state true
	I0612 21:38:00.207230   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.207263   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.207490   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207511   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207519   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.207544   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.207553   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207572   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.222521   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0612 21:38:00.222979   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.223496   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.223523   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.223899   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.224519   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.224555   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.227511   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
	I0612 21:38:00.227543   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33041
	I0612 21:38:00.227874   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.227930   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.228402   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.228409   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.228426   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.228471   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.228776   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.228780   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.228952   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.229291   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.229323   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.232640   80243 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.232662   80243 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:38:00.232690   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.233072   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.233103   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.240883   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38355
	I0612 21:38:00.241363   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.241839   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.241861   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.242217   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.242434   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.244544   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.244604   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0612 21:38:00.246924   80243 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:38:00.244915   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.248406   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:38:00.248430   80243 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:38:00.248451   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.248861   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.248887   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.249211   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.249431   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.251070   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.251137   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43271
	I0612 21:38:00.252729   80243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:00.251644   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.252033   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.252601   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.254033   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.254079   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.254111   80243 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:38:00.254127   80243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:38:00.254148   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.254211   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.254399   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.254515   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.254542   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.254712   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.254926   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.256878   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.256948   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.257836   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.258073   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.258105   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.258767   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.258993   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.259141   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.259283   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.272822   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42339
	I0612 21:38:00.273238   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.273710   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.273734   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.274221   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.274411   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.276056   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.276286   80243 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:38:00.276302   80243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:38:00.276323   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.279285   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.279351   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.279400   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.279516   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.279675   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.279809   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.279940   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.392656   80243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:00.411972   80243 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-376087" to be "Ready" ...
	I0612 21:38:00.502108   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:38:00.504572   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:38:00.504590   80243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:38:00.522021   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:38:00.522057   80243 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:38:00.538366   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:38:00.541981   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:38:00.541999   80243 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:38:00.561335   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:38:01.519955   80243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.017815416s)
	I0612 21:38:01.520006   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520019   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520087   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520100   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520312   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520334   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520343   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520350   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520422   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520435   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520444   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520452   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520554   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520573   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520647   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520678   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.520680   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.528807   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.528827   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.529143   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.529162   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.529166   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.556376   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.556399   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.556701   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.556750   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.556762   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.556780   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.556791   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.557157   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.557179   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.557190   80243 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-376087"
	I0612 21:38:01.559103   80243 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:37:59.844024   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:59.844481   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:59.844505   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:59.844433   81627 retry.go:31] will retry after 3.77902453s: waiting for machine to come up
	I0612 21:38:03.626861   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.627380   80404 main.go:141] libmachine: (embed-certs-591460) Found IP for machine: 192.168.39.147
	I0612 21:38:03.627399   80404 main.go:141] libmachine: (embed-certs-591460) Reserving static IP address...
	I0612 21:38:03.627416   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has current primary IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.627917   80404 main.go:141] libmachine: (embed-certs-591460) Reserved static IP address: 192.168.39.147
	I0612 21:38:03.627964   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "embed-certs-591460", mac: "52:54:00:41:f7:d9", ip: "192.168.39.147"} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.627981   80404 main.go:141] libmachine: (embed-certs-591460) Waiting for SSH to be available...
	I0612 21:38:03.628011   80404 main.go:141] libmachine: (embed-certs-591460) DBG | skip adding static IP to network mk-embed-certs-591460 - found existing host DHCP lease matching {name: "embed-certs-591460", mac: "52:54:00:41:f7:d9", ip: "192.168.39.147"}
	I0612 21:38:03.628030   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Getting to WaitForSSH function...
	I0612 21:38:03.630082   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.630480   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.630581   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.630762   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Using SSH client type: external
	I0612 21:38:03.630802   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa (-rw-------)
	I0612 21:38:03.630846   80404 main.go:141] libmachine: (embed-certs-591460) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:03.630872   80404 main.go:141] libmachine: (embed-certs-591460) DBG | About to run SSH command:
	I0612 21:38:03.630882   80404 main.go:141] libmachine: (embed-certs-591460) DBG | exit 0
	I0612 21:38:03.755304   80404 main.go:141] libmachine: (embed-certs-591460) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:03.755720   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetConfigRaw
	I0612 21:38:03.756310   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:03.758608   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.758927   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.758966   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.759153   80404 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/config.json ...
	I0612 21:38:03.759390   80404 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:03.759414   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:03.759641   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.761954   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.762215   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.762244   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.762371   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.762525   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.762689   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.762842   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.762995   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.763183   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.763206   80404 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:03.867900   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:03.867936   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:03.868185   80404 buildroot.go:166] provisioning hostname "embed-certs-591460"
	I0612 21:38:03.868210   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:03.868430   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.871347   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.871690   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.871721   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.871816   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.871977   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.872130   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.872258   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.872408   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.872588   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.872604   80404 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-591460 && echo "embed-certs-591460" | sudo tee /etc/hostname
	I0612 21:38:03.990526   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-591460
	
	I0612 21:38:03.990550   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.993057   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.993458   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.993485   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.993646   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.993830   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.993985   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.994125   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.994297   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.994499   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.994524   80404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-591460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-591460/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-591460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:04.120595   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:04.120623   80404 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:04.120640   80404 buildroot.go:174] setting up certificates
	I0612 21:38:04.120650   80404 provision.go:84] configureAuth start
	I0612 21:38:04.120658   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:04.120910   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:04.123483   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.123854   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.123879   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.124153   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.126901   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.127293   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.127318   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.127494   80404 provision.go:143] copyHostCerts
	I0612 21:38:04.127554   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:04.127566   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:04.127635   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:04.127736   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:04.127747   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:04.127785   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:04.127860   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:04.127870   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:04.127896   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:04.127960   80404 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.embed-certs-591460 san=[127.0.0.1 192.168.39.147 embed-certs-591460 localhost minikube]
	I0612 21:38:04.265296   80404 provision.go:177] copyRemoteCerts
	I0612 21:38:04.265361   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:04.265392   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.267703   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.268044   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.268090   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.268244   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.268421   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.268583   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.268780   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.349440   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:04.374868   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0612 21:38:04.398419   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:38:04.423319   80404 provision.go:87] duration metric: took 302.657777ms to configureAuth
	I0612 21:38:04.423353   80404 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:04.423514   80404 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:04.423586   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.426301   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.426612   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.426641   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.426796   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.426971   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.427186   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.427331   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.427553   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:04.427723   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:04.427739   80404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:04.689161   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:04.689199   80404 machine.go:97] duration metric: took 929.790838ms to provisionDockerMachine
	I0612 21:38:04.689212   80404 start.go:293] postStartSetup for "embed-certs-591460" (driver="kvm2")
	I0612 21:38:04.689223   80404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:04.689242   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.689569   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:04.689616   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.692484   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.692841   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.692864   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.693002   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.693191   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.693326   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.693469   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.923975   80762 start.go:364] duration metric: took 4m11.963543792s to acquireMachinesLock for "old-k8s-version-983302"
	I0612 21:38:04.924056   80762 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:38:04.924068   80762 fix.go:54] fixHost starting: 
	I0612 21:38:04.924507   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:04.924543   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:04.942022   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0612 21:38:04.942428   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:04.942891   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:38:04.942917   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:04.943345   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:04.943553   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:04.943726   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetState
	I0612 21:38:04.945403   80762 fix.go:112] recreateIfNeeded on old-k8s-version-983302: state=Stopped err=<nil>
	I0612 21:38:04.945427   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	W0612 21:38:04.945581   80762 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:38:04.947672   80762 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-983302" ...
	I0612 21:38:01.560387   80243 addons.go:510] duration metric: took 1.356939902s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0612 21:38:02.416070   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:04.416451   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:04.774287   80404 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:04.778568   80404 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:04.778596   80404 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:04.778667   80404 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:04.778740   80404 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:04.778819   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:04.788602   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:04.813969   80404 start.go:296] duration metric: took 124.741162ms for postStartSetup
	I0612 21:38:04.814020   80404 fix.go:56] duration metric: took 19.717527303s for fixHost
	I0612 21:38:04.814049   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.816907   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.817268   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.817294   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.817511   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.817728   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.817905   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.818087   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.818293   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:04.818501   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:04.818516   80404 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:04.923846   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228284.879920542
	
	I0612 21:38:04.923868   80404 fix.go:216] guest clock: 1718228284.879920542
	I0612 21:38:04.923874   80404 fix.go:229] Guest: 2024-06-12 21:38:04.879920542 +0000 UTC Remote: 2024-06-12 21:38:04.814026698 +0000 UTC m=+300.152179547 (delta=65.893844ms)
	I0612 21:38:04.923890   80404 fix.go:200] guest clock delta is within tolerance: 65.893844ms
	I0612 21:38:04.923894   80404 start.go:83] releasing machines lock for "embed-certs-591460", held for 19.827427255s
	I0612 21:38:04.923920   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.924155   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:04.926708   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.927102   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.927146   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.927281   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.927788   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.927955   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.928043   80404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:04.928099   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.928158   80404 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:04.928182   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.930931   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931237   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931377   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.931415   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931561   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.931587   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931592   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.931742   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.931790   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.931916   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.931916   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.932111   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.932127   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.932250   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:05.009184   80404 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:05.035746   80404 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:05.181527   80404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:05.189035   80404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:05.189113   80404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:05.205860   80404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:05.205886   80404 start.go:494] detecting cgroup driver to use...
	I0612 21:38:05.205957   80404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:05.223913   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:05.239598   80404 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:05.239679   80404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:05.253501   80404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:05.268094   80404 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:05.397260   80404 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:05.560454   80404 docker.go:233] disabling docker service ...
	I0612 21:38:05.560532   80404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:05.579197   80404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:05.593420   80404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:05.728145   80404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:05.860041   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:05.876025   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:05.895242   80404 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:38:05.895336   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.906575   80404 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:05.906662   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.918248   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.929178   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.942169   80404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:05.953542   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.969045   80404 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.989509   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:06.001532   80404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:06.012676   80404 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:06.012740   80404 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:06.030028   80404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:06.048168   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:06.190039   80404 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:06.349088   80404 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:06.349151   80404 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:06.355251   80404 start.go:562] Will wait 60s for crictl version
	I0612 21:38:06.355321   80404 ssh_runner.go:195] Run: which crictl
	I0612 21:38:06.359456   80404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:06.400450   80404 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:06.400525   80404 ssh_runner.go:195] Run: crio --version
	I0612 21:38:06.430078   80404 ssh_runner.go:195] Run: crio --version
	I0612 21:38:06.461616   80404 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:38:04.949078   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .Start
	I0612 21:38:04.949226   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring networks are active...
	I0612 21:38:04.949936   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network default is active
	I0612 21:38:04.950371   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network mk-old-k8s-version-983302 is active
	I0612 21:38:04.950813   80762 main.go:141] libmachine: (old-k8s-version-983302) Getting domain xml...
	I0612 21:38:04.951549   80762 main.go:141] libmachine: (old-k8s-version-983302) Creating domain...
	I0612 21:38:06.296150   80762 main.go:141] libmachine: (old-k8s-version-983302) Waiting to get IP...
	I0612 21:38:06.296978   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.297465   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.297570   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.297453   81824 retry.go:31] will retry after 256.609938ms: waiting for machine to come up
	I0612 21:38:06.556307   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.556935   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.556967   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.556884   81824 retry.go:31] will retry after 285.754887ms: waiting for machine to come up
	I0612 21:38:06.844674   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.845227   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.845255   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.845171   81824 retry.go:31] will retry after 326.266367ms: waiting for machine to come up
	I0612 21:38:07.172788   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:07.173414   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:07.173447   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:07.173353   81824 retry.go:31] will retry after 393.443927ms: waiting for machine to come up
	I0612 21:38:07.568084   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:07.568645   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:07.568673   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:07.568609   81824 retry.go:31] will retry after 726.66775ms: waiting for machine to come up
	I0612 21:38:06.462860   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:06.466111   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:06.466521   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:06.466551   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:06.466837   80404 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:06.471361   80404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:06.485595   80404 kubeadm.go:877] updating cluster {Name:embed-certs-591460 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:06.485718   80404 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:38:06.485761   80404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:06.528708   80404 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:38:06.528778   80404 ssh_runner.go:195] Run: which lz4
	I0612 21:38:06.533340   80404 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0612 21:38:06.538076   80404 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:38:06.538115   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 21:38:08.078495   80404 crio.go:462] duration metric: took 1.545201872s to copy over tarball
	I0612 21:38:08.078573   80404 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:38:06.917632   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:07.916734   80243 node_ready.go:49] node "default-k8s-diff-port-376087" has status "Ready":"True"
	I0612 21:38:07.916763   80243 node_ready.go:38] duration metric: took 7.504763576s for node "default-k8s-diff-port-376087" to be "Ready" ...
	I0612 21:38:07.916775   80243 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:38:07.924249   80243 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.931751   80243 pod_ready.go:92] pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:07.931773   80243 pod_ready.go:81] duration metric: took 7.493608ms for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.931782   80243 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.937804   80243 pod_ready.go:92] pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:07.937880   80243 pod_ready.go:81] duration metric: took 6.090191ms for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.937904   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:09.944927   80243 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:08.296811   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:08.297295   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:08.297319   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:08.297250   81824 retry.go:31] will retry after 658.540746ms: waiting for machine to come up
	I0612 21:38:08.957164   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:08.957611   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:08.957635   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:08.957576   81824 retry.go:31] will retry after 921.725713ms: waiting for machine to come up
	I0612 21:38:09.880881   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:09.881672   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:09.881703   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:09.881604   81824 retry.go:31] will retry after 1.355846361s: waiting for machine to come up
	I0612 21:38:11.238616   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:11.239058   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:11.239094   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:11.238996   81824 retry.go:31] will retry after 1.3469357s: waiting for machine to come up
	I0612 21:38:12.587245   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:12.587747   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:12.587785   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:12.587683   81824 retry.go:31] will retry after 1.616666063s: waiting for machine to come up
	I0612 21:38:10.426384   80404 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.347778968s)
	I0612 21:38:10.426418   80404 crio.go:469] duration metric: took 2.347893056s to extract the tarball
	I0612 21:38:10.426427   80404 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:38:10.472235   80404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:10.522846   80404 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:38:10.522869   80404 cache_images.go:84] Images are preloaded, skipping loading
	I0612 21:38:10.522876   80404 kubeadm.go:928] updating node { 192.168.39.147 8443 v1.30.1 crio true true} ...
	I0612 21:38:10.523007   80404 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-591460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:38:10.523163   80404 ssh_runner.go:195] Run: crio config
	I0612 21:38:10.577165   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:38:10.577193   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:10.577209   80404 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:38:10.577244   80404 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-591460 NodeName:embed-certs-591460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:38:10.577400   80404 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-591460"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:38:10.577479   80404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:38:10.587499   80404 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:38:10.587573   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:38:10.597410   80404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0612 21:38:10.614617   80404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:38:10.632222   80404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0612 21:38:10.649693   80404 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I0612 21:38:10.653639   80404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:10.666501   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:10.802679   80404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:10.820975   80404 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460 for IP: 192.168.39.147
	I0612 21:38:10.821001   80404 certs.go:194] generating shared ca certs ...
	I0612 21:38:10.821022   80404 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:10.821187   80404 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:38:10.821233   80404 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:38:10.821243   80404 certs.go:256] generating profile certs ...
	I0612 21:38:10.821326   80404 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/client.key
	I0612 21:38:10.821402   80404 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.key.3b2e21e0
	I0612 21:38:10.821440   80404 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.key
	I0612 21:38:10.821575   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:38:10.821616   80404 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:38:10.821626   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:38:10.821655   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:38:10.821706   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:38:10.821751   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:38:10.821812   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:10.822621   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:38:10.879261   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:38:10.924352   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:38:10.961294   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:38:10.993792   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0612 21:38:11.039515   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:38:11.063161   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:38:11.086759   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:38:11.109693   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:38:11.133083   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:38:11.155716   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:38:11.181860   80404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:38:11.199989   80404 ssh_runner.go:195] Run: openssl version
	I0612 21:38:11.205811   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:38:11.216640   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.221692   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.221754   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.227829   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:38:11.239918   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:38:11.251648   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.256123   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.256176   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.261880   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:38:11.273184   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:38:11.284832   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.289679   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.289732   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.295338   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:38:11.306317   80404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:38:11.310737   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:38:11.320403   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:38:11.327756   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:38:11.333976   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:38:11.340200   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:38:11.346386   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 21:38:11.352268   80404 kubeadm.go:391] StartCluster: {Name:embed-certs-591460 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:38:11.352385   80404 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:38:11.352435   80404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:11.390802   80404 cri.go:89] found id: ""
	I0612 21:38:11.390870   80404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:38:11.402604   80404 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:38:11.402626   80404 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:38:11.402630   80404 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:38:11.402682   80404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:38:11.413636   80404 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:38:11.414999   80404 kubeconfig.go:125] found "embed-certs-591460" server: "https://192.168.39.147:8443"
	I0612 21:38:11.417654   80404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:38:11.427456   80404 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.147
	I0612 21:38:11.427496   80404 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:38:11.427509   80404 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:38:11.427559   80404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:11.462135   80404 cri.go:89] found id: ""
	I0612 21:38:11.462211   80404 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:38:11.478193   80404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:38:11.488816   80404 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:38:11.488838   80404 kubeadm.go:156] found existing configuration files:
	
	I0612 21:38:11.488899   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:38:11.498079   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:38:11.498154   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:38:11.508044   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:38:11.519721   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:38:11.519785   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:38:11.529554   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:38:11.538699   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:38:11.538750   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:38:11.548154   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:38:11.559980   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:38:11.560053   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:38:11.569737   80404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:38:11.579812   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:11.703454   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:12.773142   80404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069644541s)
	I0612 21:38:12.773183   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:12.991458   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:13.080268   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:13.207751   80404 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:38:13.207934   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:13.708672   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:14.208389   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:14.268408   80404 api_server.go:72] duration metric: took 1.060631955s to wait for apiserver process to appear ...
	I0612 21:38:14.268443   80404 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:38:14.268464   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:14.269096   80404 api_server.go:269] stopped: https://192.168.39.147:8443/healthz: Get "https://192.168.39.147:8443/healthz": dial tcp 192.168.39.147:8443: connect: connection refused
	I0612 21:38:10.445507   80243 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.445530   80243 pod_ready.go:81] duration metric: took 2.50760731s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.445542   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.450290   80243 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.450310   80243 pod_ready.go:81] duration metric: took 4.759656ms for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.450323   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.454909   80243 pod_ready.go:92] pod "kube-proxy-8lrgv" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.454940   80243 pod_ready.go:81] duration metric: took 4.597123ms for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.454951   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:12.587416   80243 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:13.505858   80243 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:13.505884   80243 pod_ready.go:81] duration metric: took 3.050925673s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:13.505896   80243 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:14.206281   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:14.206781   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:14.206810   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:14.206716   81824 retry.go:31] will retry after 2.057638604s: waiting for machine to come up
	I0612 21:38:16.266372   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:16.266920   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:16.266955   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:16.266858   81824 retry.go:31] will retry after 2.387834661s: waiting for machine to come up
	I0612 21:38:14.769114   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.056504   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:38:17.056539   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:38:17.056557   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.075356   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:38:17.075391   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:38:17.268731   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.277080   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:38:17.277111   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:38:17.768638   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.773438   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:38:17.773464   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:38:18.269037   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:18.273939   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0612 21:38:18.286895   80404 api_server.go:141] control plane version: v1.30.1
	I0612 21:38:18.286922   80404 api_server.go:131] duration metric: took 4.018473342s to wait for apiserver health ...
	I0612 21:38:18.286931   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:38:18.286937   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:18.288955   80404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:38:18.290619   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:38:18.305334   80404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:38:18.336590   80404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:38:18.351276   80404 system_pods.go:59] 8 kube-system pods found
	I0612 21:38:18.351320   80404 system_pods.go:61] "coredns-7db6d8ff4d-z99cq" [575689b8-3c51-45c8-874c-481e4b9db39b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:38:18.351331   80404 system_pods.go:61] "etcd-embed-certs-591460" [190c1552-6bca-41f2-9ea9-e415e1ae9406] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:38:18.351342   80404 system_pods.go:61] "kube-apiserver-embed-certs-591460" [c0fed28f-1d80-44eb-a66a-3a5b36704882] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:38:18.351350   80404 system_pods.go:61] "kube-controller-manager-embed-certs-591460" [79758f2a-2517-4a76-a3ae-536ac3adf781] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:38:18.351357   80404 system_pods.go:61] "kube-proxy-79kz5" [74ddb284-7cb2-46ec-ab9f-246dbfa0c4ec] Running
	I0612 21:38:18.351372   80404 system_pods.go:61] "kube-scheduler-embed-certs-591460" [d9916521-fcc1-4bf1-8b03-8a5553f07bd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:38:18.351383   80404 system_pods.go:61] "metrics-server-569cc877fc-bkhxn" [f78482c8-82ea-4dbd-999f-2e4c73c98b65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:38:18.351396   80404 system_pods.go:61] "storage-provisioner" [b3b117f7-ac44-4430-afb4-c6991ce1b71d] Running
	I0612 21:38:18.351407   80404 system_pods.go:74] duration metric: took 14.792966ms to wait for pod list to return data ...
	I0612 21:38:18.351419   80404 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:38:18.357736   80404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:38:18.357769   80404 node_conditions.go:123] node cpu capacity is 2
	I0612 21:38:18.357786   80404 node_conditions.go:105] duration metric: took 6.360028ms to run NodePressure ...
	I0612 21:38:18.357805   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:18.634312   80404 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:38:18.638679   80404 kubeadm.go:733] kubelet initialised
	I0612 21:38:18.638700   80404 kubeadm.go:734] duration metric: took 4.362243ms waiting for restarted kubelet to initialise ...
	I0612 21:38:18.638706   80404 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:38:18.643840   80404 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.648561   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.648585   80404 pod_ready.go:81] duration metric: took 4.721795ms for pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.648597   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.648606   80404 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.654013   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "etcd-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.654036   80404 pod_ready.go:81] duration metric: took 5.419602ms for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.654046   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "etcd-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.654054   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.659445   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.659468   80404 pod_ready.go:81] duration metric: took 5.404211ms for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.659479   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.659487   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.741451   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.741480   80404 pod_ready.go:81] duration metric: took 81.981354ms for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.741489   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.741495   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-79kz5" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:19.140710   80404 pod_ready.go:92] pod "kube-proxy-79kz5" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:19.140734   80404 pod_ready.go:81] duration metric: took 399.230349ms for pod "kube-proxy-79kz5" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:19.140744   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:15.513300   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:18.013924   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:20.024841   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:18.656575   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:18.657074   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:18.657111   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:18.657022   81824 retry.go:31] will retry after 3.518256927s: waiting for machine to come up
	I0612 21:38:22.176416   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.176901   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has current primary IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.176930   80762 main.go:141] libmachine: (old-k8s-version-983302) Found IP for machine: 192.168.50.81
	I0612 21:38:22.176965   80762 main.go:141] libmachine: (old-k8s-version-983302) Reserving static IP address...
	I0612 21:38:22.177385   80762 main.go:141] libmachine: (old-k8s-version-983302) Reserved static IP address: 192.168.50.81
	I0612 21:38:22.177422   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "old-k8s-version-983302", mac: "52:54:00:7b:c8:d2", ip: "192.168.50.81"} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.177435   80762 main.go:141] libmachine: (old-k8s-version-983302) Waiting for SSH to be available...
	I0612 21:38:22.177459   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | skip adding static IP to network mk-old-k8s-version-983302 - found existing host DHCP lease matching {name: "old-k8s-version-983302", mac: "52:54:00:7b:c8:d2", ip: "192.168.50.81"}
	I0612 21:38:22.177471   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Getting to WaitForSSH function...
	I0612 21:38:22.179728   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.180130   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.180158   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.180273   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH client type: external
	I0612 21:38:22.180334   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa (-rw-------)
	I0612 21:38:22.180368   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:22.180387   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | About to run SSH command:
	I0612 21:38:22.180399   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | exit 0
	I0612 21:38:22.308621   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:22.308979   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetConfigRaw
	I0612 21:38:22.309620   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:22.312747   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.313124   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.313155   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.313421   80762 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:38:22.313635   80762 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:22.313658   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:22.313884   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.316476   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.316961   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.317014   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.317221   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.317408   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.317600   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.317775   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.317955   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.318195   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.318207   80762 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:22.431693   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:22.431728   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.431979   80762 buildroot.go:166] provisioning hostname "old-k8s-version-983302"
	I0612 21:38:22.432006   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.432191   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.434830   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.435267   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.435300   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.435431   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.435598   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.435718   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.435826   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.436056   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.436237   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.436252   80762 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-983302 && echo "old-k8s-version-983302" | sudo tee /etc/hostname
	I0612 21:38:22.563119   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-983302
	
	I0612 21:38:22.563184   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.565915   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.566281   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.566315   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.566513   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.566704   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.566885   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.567021   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.567243   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.567463   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.567490   80762 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-983302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-983302/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-983302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:22.690443   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:22.690474   80762 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:22.690494   80762 buildroot.go:174] setting up certificates
	I0612 21:38:22.690504   80762 provision.go:84] configureAuth start
	I0612 21:38:22.690514   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.690774   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:22.693186   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.693528   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.693576   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.693689   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.695948   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.696285   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.696318   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.696432   80762 provision.go:143] copyHostCerts
	I0612 21:38:22.696501   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:22.696521   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:22.696583   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:22.696662   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:22.696671   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:22.696693   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:22.696774   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:22.696784   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:22.696803   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:22.696847   80762 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-983302 san=[127.0.0.1 192.168.50.81 localhost minikube old-k8s-version-983302]
	I0612 21:38:23.576378   80157 start.go:364] duration metric: took 53.730674695s to acquireMachinesLock for "no-preload-087875"
	I0612 21:38:23.576429   80157 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:38:23.576436   80157 fix.go:54] fixHost starting: 
	I0612 21:38:23.576844   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:23.576875   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:23.594879   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40925
	I0612 21:38:23.595284   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:23.595811   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:38:23.595836   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:23.596201   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:23.596404   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:23.596559   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:38:23.598372   80157 fix.go:112] recreateIfNeeded on no-preload-087875: state=Stopped err=<nil>
	I0612 21:38:23.598399   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	W0612 21:38:23.598558   80157 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:38:23.600649   80157 out.go:177] * Restarting existing kvm2 VM for "no-preload-087875" ...
	I0612 21:38:21.147354   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:23.147393   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:22.863618   80762 provision.go:177] copyRemoteCerts
	I0612 21:38:22.863672   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:22.863698   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.866979   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.867371   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.867403   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.867548   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.867734   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.867904   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.868126   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:22.958350   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 21:38:22.984409   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:23.009623   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0612 21:38:23.038026   80762 provision.go:87] duration metric: took 347.510898ms to configureAuth
	I0612 21:38:23.038063   80762 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:23.038309   80762 config.go:182] Loaded profile config "old-k8s-version-983302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:38:23.038390   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.041196   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.041634   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.041660   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.041842   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.042044   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.042222   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.042410   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.042580   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:23.042780   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:23.042799   80762 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:23.324862   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:23.324893   80762 machine.go:97] duration metric: took 1.01124225s to provisionDockerMachine
	I0612 21:38:23.324904   80762 start.go:293] postStartSetup for "old-k8s-version-983302" (driver="kvm2")
	I0612 21:38:23.324913   80762 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:23.324928   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.325240   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:23.325274   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.328007   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.328343   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.328372   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.328578   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.328770   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.328939   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.329068   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.416040   80762 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:23.420586   80762 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:23.420607   80762 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:23.420674   80762 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:23.420739   80762 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:23.420823   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:23.432266   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:23.460619   80762 start.go:296] duration metric: took 135.703593ms for postStartSetup
	I0612 21:38:23.460661   80762 fix.go:56] duration metric: took 18.536593686s for fixHost
	I0612 21:38:23.460684   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.463415   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.463745   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.463780   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.463909   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.464110   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.464248   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.464378   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.464533   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:23.464742   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:23.464754   80762 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:23.576211   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228303.539451044
	
	I0612 21:38:23.576231   80762 fix.go:216] guest clock: 1718228303.539451044
	I0612 21:38:23.576239   80762 fix.go:229] Guest: 2024-06-12 21:38:23.539451044 +0000 UTC Remote: 2024-06-12 21:38:23.460665921 +0000 UTC m=+270.637213069 (delta=78.785123ms)
	I0612 21:38:23.576285   80762 fix.go:200] guest clock delta is within tolerance: 78.785123ms
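The fix.go lines above read the guest's clock over SSH ("date +%s.%N"), compare it with the host's wall clock, and accept the 78ms drift because it is within tolerance. A minimal sketch of that comparison, assuming a hypothetical 2-second tolerance (the real threshold is defined inside minikube's fix.go and may differ):

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaWithinTolerance reports the absolute guest/host clock drift and
    // whether it is within the given tolerance. The tolerance value here is an
    // illustrative assumption, not minikube's actual constant.
    func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	guest := time.Unix(1718228303, 539451044) // parsed from "date +%s.%N" on the guest
    	host := time.Now()
    	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
    }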
	I0612 21:38:23.576291   80762 start.go:83] releasing machines lock for "old-k8s-version-983302", held for 18.65227368s
	I0612 21:38:23.576316   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.576617   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:23.579493   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.579881   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.579913   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.580120   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580693   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580865   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580952   80762 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:23.581005   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.581111   80762 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:23.581141   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.584053   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584262   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584443   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.584479   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584587   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.584690   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.584728   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584757   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.584855   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.584918   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.584980   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.585067   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.585115   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.585227   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.666055   80762 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:23.688409   80762 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:23.848030   80762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:23.855302   80762 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:23.855383   80762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:23.874362   80762 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:23.874389   80762 start.go:494] detecting cgroup driver to use...
	I0612 21:38:23.874461   80762 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:23.893239   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:23.909774   80762 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:23.909844   80762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:23.926084   80762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:23.943341   80762 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:24.072731   80762 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:24.244551   80762 docker.go:233] disabling docker service ...
	I0612 21:38:24.244624   80762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:24.261862   80762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:24.277051   80762 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:24.426146   80762 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:24.560634   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:24.575339   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:24.595965   80762 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0612 21:38:24.596043   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.607814   80762 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:24.607892   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.619001   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.630982   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.644326   80762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:24.658640   80762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:24.673944   80762 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:24.673994   80762 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:24.693853   80762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:24.709251   80762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:24.856222   80762 ssh_runner.go:195] Run: sudo systemctl restart crio
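The run of ssh_runner calls above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup), clears stale CNI state, loads br_netfilter, enables IP forwarding, and then restarts cri-o. The sketch below collects those same commands, reconstructed from the log, and runs them locally with os/exec; it is illustrative only, requires root on the guest, and the exact quoting minikube uses may differ.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Commands reconstructed from the log above.
    var crioSetup = []string{
    	`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
    	`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
    	`sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
    	`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
    	`rm -rf /etc/cni/net.mk`,
    	`modprobe br_netfilter`,
    	`echo 1 > /proc/sys/net/ipv4/ip_forward`,
    	`systemctl daemon-reload`,
    	`systemctl restart crio`,
    }

    func main() {
    	for _, cmd := range crioSetup {
    		// Each step mirrors one ssh_runner.Run call from the log.
    		if out, err := exec.Command("sh", "-c", cmd).CombinedOutput(); err != nil {
    			fmt.Printf("command %q failed: %v\n%s", cmd, err, out)
    		}
    	}
    }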
	I0612 21:38:25.023760   80762 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:25.023842   80762 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:25.029449   80762 start.go:562] Will wait 60s for crictl version
	I0612 21:38:25.029522   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:25.033750   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:25.080911   80762 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:25.081018   80762 ssh_runner.go:195] Run: crio --version
	I0612 21:38:25.111727   80762 ssh_runner.go:195] Run: crio --version
	I0612 21:38:25.145999   80762 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0612 21:38:22.512748   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:24.515486   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:23.602119   80157 main.go:141] libmachine: (no-preload-087875) Calling .Start
	I0612 21:38:23.602319   80157 main.go:141] libmachine: (no-preload-087875) Ensuring networks are active...
	I0612 21:38:23.603167   80157 main.go:141] libmachine: (no-preload-087875) Ensuring network default is active
	I0612 21:38:23.603533   80157 main.go:141] libmachine: (no-preload-087875) Ensuring network mk-no-preload-087875 is active
	I0612 21:38:23.603887   80157 main.go:141] libmachine: (no-preload-087875) Getting domain xml...
	I0612 21:38:23.604617   80157 main.go:141] libmachine: (no-preload-087875) Creating domain...
	I0612 21:38:24.978550   80157 main.go:141] libmachine: (no-preload-087875) Waiting to get IP...
	I0612 21:38:24.979551   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:24.979945   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:24.980007   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:24.979925   81986 retry.go:31] will retry after 224.557195ms: waiting for machine to come up
	I0612 21:38:25.206441   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.206928   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.206957   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.206875   81986 retry.go:31] will retry after 361.682908ms: waiting for machine to come up
	I0612 21:38:25.570564   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.571139   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.571184   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.571089   81986 retry.go:31] will retry after 328.335873ms: waiting for machine to come up
	I0612 21:38:25.901471   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.902020   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.902054   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.901953   81986 retry.go:31] will retry after 505.408325ms: waiting for machine to come up
	I0612 21:38:26.408636   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:26.409139   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:26.409167   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:26.409091   81986 retry.go:31] will retry after 749.519426ms: waiting for machine to come up
	I0612 21:38:27.160100   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:27.160563   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:27.160611   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:27.160537   81986 retry.go:31] will retry after 641.037463ms: waiting for machine to come up
	I0612 21:38:25.147420   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:25.151029   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:25.151402   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:25.151432   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:25.151726   80762 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:25.156561   80762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:25.171243   80762 kubeadm.go:877] updating cluster {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:25.171386   80762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:38:25.171429   80762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:25.225872   80762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:38:25.225936   80762 ssh_runner.go:195] Run: which lz4
	I0612 21:38:25.230447   80762 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0612 21:38:25.235452   80762 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:38:25.235485   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0612 21:38:27.033962   80762 crio.go:462] duration metric: took 1.803565745s to copy over tarball
	I0612 21:38:27.034045   80762 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:38:25.149629   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:27.651785   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:26.516743   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:29.013751   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:27.803722   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:27.804278   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:27.804316   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:27.804252   81986 retry.go:31] will retry after 1.184505978s: waiting for machine to come up
	I0612 21:38:28.990221   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:28.990736   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:28.990763   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:28.990709   81986 retry.go:31] will retry after 1.061139219s: waiting for machine to come up
	I0612 21:38:30.054187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:30.054768   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:30.054805   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:30.054718   81986 retry.go:31] will retry after 1.621121981s: waiting for machine to come up
	I0612 21:38:31.677355   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:31.677938   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:31.677966   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:31.677890   81986 retry.go:31] will retry after 2.17746309s: waiting for machine to come up
	I0612 21:38:30.212028   80762 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.177947965s)
	I0612 21:38:30.212073   80762 crio.go:469] duration metric: took 3.178080815s to extract the tarball
	I0612 21:38:30.212085   80762 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:38:30.256957   80762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:30.297891   80762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:38:30.297917   80762 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0612 21:38:30.298025   80762 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.298045   80762 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.298055   80762 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.298021   80762 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0612 21:38:30.298106   80762 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.298062   80762 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.298004   80762 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:30.298079   80762 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.299755   80762 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0612 21:38:30.299842   80762 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.299848   80762 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.299843   80762 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:30.299866   80762 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.299876   80762 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.299905   80762 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.299755   80762 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.466739   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0612 21:38:30.516078   80762 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0612 21:38:30.516127   80762 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0612 21:38:30.516174   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.520362   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0612 21:38:30.545437   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.563320   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0612 21:38:30.599110   80762 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0612 21:38:30.599155   80762 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.599217   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.603578   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.639450   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0612 21:38:30.649462   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.650602   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.652555   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.656970   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.672136   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.766185   80762 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0612 21:38:30.766233   80762 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.766279   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.778901   80762 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0612 21:38:30.778946   80762 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.778952   80762 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0612 21:38:30.778983   80762 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.778994   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.779041   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.793610   80762 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0612 21:38:30.793650   80762 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.793698   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.807451   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.807482   80762 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0612 21:38:30.807518   80762 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.807458   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.807518   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.807557   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.807559   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.916470   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0612 21:38:30.916564   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0612 21:38:30.916576   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0612 21:38:30.916603   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0612 21:38:30.916646   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.953152   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0612 21:38:31.194046   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:31.341827   80762 cache_images.go:92] duration metric: took 1.043891497s to LoadCachedImages
	W0612 21:38:31.341922   80762 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0612 21:38:31.341937   80762 kubeadm.go:928] updating node { 192.168.50.81 8443 v1.20.0 crio true true} ...
	I0612 21:38:31.342064   80762 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-983302 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:38:31.342154   80762 ssh_runner.go:195] Run: crio config
	I0612 21:38:31.395673   80762 cni.go:84] Creating CNI manager for ""
	I0612 21:38:31.395706   80762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:31.395722   80762 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:38:31.395744   80762 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-983302 NodeName:old-k8s-version-983302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0612 21:38:31.395918   80762 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-983302"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:38:31.395995   80762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0612 21:38:31.410706   80762 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:38:31.410785   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:38:31.425161   80762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0612 21:38:31.445883   80762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:38:31.463605   80762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0612 21:38:31.482797   80762 ssh_runner.go:195] Run: grep 192.168.50.81	control-plane.minikube.internal$ /etc/hosts
	I0612 21:38:31.486974   80762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:31.499681   80762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:31.645490   80762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:31.668769   80762 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302 for IP: 192.168.50.81
	I0612 21:38:31.668797   80762 certs.go:194] generating shared ca certs ...
	I0612 21:38:31.668820   80762 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:31.668987   80762 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:38:31.669061   80762 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:38:31.669088   80762 certs.go:256] generating profile certs ...
	I0612 21:38:31.669212   80762 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/client.key
	I0612 21:38:31.669309   80762 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key.1098c83c
	I0612 21:38:31.669373   80762 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key
	I0612 21:38:31.669548   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:38:31.669598   80762 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:38:31.669613   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:38:31.669662   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:38:31.669723   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:38:31.669759   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:38:31.669830   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:31.670835   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:38:31.717330   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:38:31.754900   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:38:31.798099   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:38:31.839647   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0612 21:38:31.883454   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:38:31.920765   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:38:31.953069   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0612 21:38:31.978134   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:38:32.002475   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:38:32.027784   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:38:32.053563   80762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:38:32.074493   80762 ssh_runner.go:195] Run: openssl version
	I0612 21:38:32.080620   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:38:32.093531   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.098615   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.098688   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.104777   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:38:32.116551   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:38:32.130188   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.135197   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.135279   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.142777   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:38:32.156051   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:38:32.169866   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.175249   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.175340   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.181561   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:38:32.193430   80762 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:38:32.198235   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:38:32.204654   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:38:32.210771   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:38:32.216966   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:38:32.223203   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:38:32.230990   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
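Each `openssl x509 ... -checkend 86400` call above exits successfully only if the named certificate will still be valid 24 hours from now; that is how minikube decides whether the existing control-plane certs can be reused. A small sketch of the same check done natively in Go, assuming the certificate files exist at the paths taken from the log (a reconstruction, not minikube's own implementation):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same question "openssl x509 -checkend" answers via its exit status.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	// Paths taken from the log; adjust for the machine being inspected.
    	for _, p := range []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	} {
    		soon, err := expiresWithin(p, 24*time.Hour)
    		if err != nil {
    			fmt.Println(err)
    			continue
    		}
    		fmt.Printf("%s expires within 24h: %v\n", p, soon)
    	}
    }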
	I0612 21:38:32.237290   80762 kubeadm.go:391] StartCluster: {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:38:32.237446   80762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:38:32.237503   80762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:32.282436   80762 cri.go:89] found id: ""
	I0612 21:38:32.282516   80762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:38:32.295283   80762 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:38:32.295313   80762 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:38:32.295321   80762 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:38:32.295400   80762 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:38:32.307483   80762 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:38:32.308555   80762 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-983302" does not appear in /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:38:32.309335   80762 kubeconfig.go:62] /home/jenkins/minikube-integration/17779-14199/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-983302" cluster setting kubeconfig missing "old-k8s-version-983302" context setting]
	I0612 21:38:32.310486   80762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:32.397524   80762 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:38:32.411765   80762 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.81
	I0612 21:38:32.411797   80762 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:38:32.411807   80762 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:38:32.411849   80762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:32.460009   80762 cri.go:89] found id: ""
	I0612 21:38:32.460078   80762 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:38:32.481670   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:38:32.493664   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:38:32.493684   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:38:32.493734   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:38:32.503974   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:38:32.504044   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:38:32.515971   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:38:32.525772   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:38:32.525832   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:38:32.537137   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:38:32.548539   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:38:32.548600   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:38:32.560401   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:38:32.570608   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:38:32.570681   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:38:32.582763   80762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:38:32.594407   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:32.734633   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:30.151681   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:31.658859   80404 pod_ready.go:92] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:31.658881   80404 pod_ready.go:81] duration metric: took 12.518130926s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:31.658890   80404 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:33.666360   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:31.357093   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:33.513222   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:33.857141   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:33.857675   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:33.857702   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:33.857648   81986 retry.go:31] will retry after 2.485654549s: waiting for machine to come up
	I0612 21:38:36.344611   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:36.345117   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:36.345148   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:36.345075   81986 retry.go:31] will retry after 3.560063035s: waiting for machine to come up
	I0612 21:38:33.526337   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.768139   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.896716   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.986708   80762 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:38:33.986832   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:34.487194   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:34.987580   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.486966   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.987793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:36.487534   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:36.987526   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:37.487035   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.669161   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:38.166177   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:35.513787   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:38.011903   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:39.907588   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:39.908051   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:39.908110   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:39.907994   81986 retry.go:31] will retry after 4.524521166s: waiting for machine to come up
	I0612 21:38:37.986904   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:38.487262   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:38.986907   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:39.486895   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:39.987060   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.487385   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.987049   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:41.487325   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:41.987550   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:42.487225   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.665078   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:42.665731   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:44.666653   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:40.512741   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:42.513175   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:45.013451   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:44.434330   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.434850   80157 main.go:141] libmachine: (no-preload-087875) Found IP for machine: 192.168.72.63
	I0612 21:38:44.434883   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has current primary IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.434893   80157 main.go:141] libmachine: (no-preload-087875) Reserving static IP address...
	I0612 21:38:44.435324   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "no-preload-087875", mac: "52:54:00:6b:a2:aa", ip: "192.168.72.63"} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.435358   80157 main.go:141] libmachine: (no-preload-087875) Reserved static IP address: 192.168.72.63
	I0612 21:38:44.435378   80157 main.go:141] libmachine: (no-preload-087875) DBG | skip adding static IP to network mk-no-preload-087875 - found existing host DHCP lease matching {name: "no-preload-087875", mac: "52:54:00:6b:a2:aa", ip: "192.168.72.63"}
	I0612 21:38:44.435388   80157 main.go:141] libmachine: (no-preload-087875) Waiting for SSH to be available...
	I0612 21:38:44.435397   80157 main.go:141] libmachine: (no-preload-087875) DBG | Getting to WaitForSSH function...
	I0612 21:38:44.437881   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.438196   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.438218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.438385   80157 main.go:141] libmachine: (no-preload-087875) DBG | Using SSH client type: external
	I0612 21:38:44.438414   80157 main.go:141] libmachine: (no-preload-087875) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa (-rw-------)
	I0612 21:38:44.438452   80157 main.go:141] libmachine: (no-preload-087875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:44.438469   80157 main.go:141] libmachine: (no-preload-087875) DBG | About to run SSH command:
	I0612 21:38:44.438489   80157 main.go:141] libmachine: (no-preload-087875) DBG | exit 0
	I0612 21:38:44.571149   80157 main.go:141] libmachine: (no-preload-087875) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:44.571499   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetConfigRaw
	I0612 21:38:44.572172   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:44.574754   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.575142   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.575187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.575406   80157 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/config.json ...
	I0612 21:38:44.575580   80157 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:44.575595   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:44.575825   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.578584   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.579008   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.579030   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.579214   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.579394   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.579534   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.579684   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.579924   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.580096   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.580109   80157 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:44.691573   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:44.691609   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.691890   80157 buildroot.go:166] provisioning hostname "no-preload-087875"
	I0612 21:38:44.691914   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.692120   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.695218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.695697   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.695729   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.695783   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.695986   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.696200   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.696383   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.696572   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.696776   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.696794   80157 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-087875 && echo "no-preload-087875" | sudo tee /etc/hostname
	I0612 21:38:44.821857   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-087875
	
	I0612 21:38:44.821893   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.824821   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.825263   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.825295   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.825523   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.825740   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.825912   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.826024   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.826187   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.826406   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.826430   80157 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-087875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-087875/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-087875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:44.948871   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:44.948904   80157 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:44.948930   80157 buildroot.go:174] setting up certificates
	I0612 21:38:44.948941   80157 provision.go:84] configureAuth start
	I0612 21:38:44.948954   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.949247   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:44.952166   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.952511   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.952538   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.952662   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.955149   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.955483   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.955505   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.955658   80157 provision.go:143] copyHostCerts
	I0612 21:38:44.955731   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:44.955743   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:44.955807   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:44.955929   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:44.955942   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:44.955975   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:44.956052   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:44.956059   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:44.956078   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:44.956125   80157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.no-preload-087875 san=[127.0.0.1 192.168.72.63 localhost minikube no-preload-087875]
	I0612 21:38:45.138701   80157 provision.go:177] copyRemoteCerts
	I0612 21:38:45.138758   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:45.138781   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.141540   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.142011   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.142055   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.142199   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.142457   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.142603   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.142765   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.234480   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:45.259043   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0612 21:38:45.290511   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:38:45.316377   80157 provision.go:87] duration metric: took 367.423709ms to configureAuth
	I0612 21:38:45.316403   80157 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:45.316607   80157 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:45.316684   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.319596   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.320160   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.320187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.320384   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.320598   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.320778   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.320973   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.321203   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:45.321368   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:45.321387   80157 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:45.611478   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:45.611511   80157 machine.go:97] duration metric: took 1.035919707s to provisionDockerMachine
	I0612 21:38:45.611523   80157 start.go:293] postStartSetup for "no-preload-087875" (driver="kvm2")
	I0612 21:38:45.611533   80157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:45.611556   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.611843   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:45.611862   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.615071   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.615542   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.615582   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.615715   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.615889   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.616028   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.616204   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.707710   80157 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:45.712155   80157 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:45.712177   80157 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:45.712235   80157 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:45.712301   80157 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:45.712386   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:45.722654   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:45.747626   80157 start.go:296] duration metric: took 136.091584ms for postStartSetup
	I0612 21:38:45.747666   80157 fix.go:56] duration metric: took 22.171227252s for fixHost
	I0612 21:38:45.747685   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.750588   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.750972   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.750999   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.751231   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.751443   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.751598   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.751773   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.752005   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:45.752181   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:45.752195   80157 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:45.864042   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228325.837473906
	
	I0612 21:38:45.864068   80157 fix.go:216] guest clock: 1718228325.837473906
	I0612 21:38:45.864079   80157 fix.go:229] Guest: 2024-06-12 21:38:45.837473906 +0000 UTC Remote: 2024-06-12 21:38:45.747669277 +0000 UTC m=+358.493088442 (delta=89.804629ms)
	I0612 21:38:45.864106   80157 fix.go:200] guest clock delta is within tolerance: 89.804629ms
	I0612 21:38:45.864114   80157 start.go:83] releasing machines lock for "no-preload-087875", held for 22.287706082s
	I0612 21:38:45.864152   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.864448   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:45.867230   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.867603   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.867633   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.867768   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868293   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868453   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868535   80157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:45.868575   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.868663   80157 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:45.868681   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.871218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871489   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871678   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.871719   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871915   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.872061   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.872085   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.872109   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.872240   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.872246   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.872522   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.872529   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.872692   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.872868   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.953249   80157 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:45.976778   80157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:46.124511   80157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:46.130509   80157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:46.130575   80157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:46.149670   80157 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:46.149691   80157 start.go:494] detecting cgroup driver to use...
	I0612 21:38:46.149755   80157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:46.167865   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:46.182896   80157 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:46.182951   80157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:46.197058   80157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:46.211517   80157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:46.331986   80157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:46.500675   80157 docker.go:233] disabling docker service ...
	I0612 21:38:46.500745   80157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:46.516858   80157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:46.530617   80157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:46.674917   80157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:46.810090   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:46.825079   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:46.843895   80157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:38:46.843963   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.854170   80157 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:46.854245   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.864699   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.875057   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.886063   80157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:46.897688   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.908984   80157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.926803   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.939373   80157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:46.948868   80157 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:46.948922   80157 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:46.963593   80157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:46.973735   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:47.108669   80157 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:47.249938   80157 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:47.250044   80157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:47.255480   80157 start.go:562] Will wait 60s for crictl version
	I0612 21:38:47.255556   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.259730   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:47.303074   80157 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:47.303187   80157 ssh_runner.go:195] Run: crio --version
	I0612 21:38:47.332225   80157 ssh_runner.go:195] Run: crio --version
	I0612 21:38:47.363628   80157 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:38:42.987579   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:43.487465   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:43.987265   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:44.487935   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:44.987399   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:45.487793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:45.986898   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:46.486985   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:46.986848   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:47.486947   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:47.164573   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:49.165711   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:47.512195   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:49.512366   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:47.365068   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:47.367703   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:47.368079   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:47.368103   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:47.368325   80157 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:47.372608   80157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:47.386411   80157 kubeadm.go:877] updating cluster {Name:no-preload-087875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:47.386750   80157 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:38:47.386796   80157 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:47.422165   80157 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:38:47.422189   80157 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0612 21:38:47.422227   80157 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:47.422280   80157 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.422355   80157 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.422370   80157 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.422311   80157 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.422347   80157 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.422318   80157 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0612 21:38:47.422599   80157 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.423599   80157 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0612 21:38:47.423610   80157 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.423612   80157 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.423630   80157 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:47.423626   80157 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.423699   80157 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.423737   80157 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.423720   80157 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.556807   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0612 21:38:47.557424   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.561887   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.569402   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.571880   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.576879   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.587848   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.759890   80157 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0612 21:38:47.759926   80157 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.759947   80157 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0612 21:38:47.759973   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.759976   80157 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0612 21:38:47.760006   80157 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.760015   80157 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0612 21:38:47.759977   80157 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.760061   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760063   80157 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.760075   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760073   80157 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0612 21:38:47.760091   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760101   80157 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.760164   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.766878   80157 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0612 21:38:47.766905   80157 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.766943   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.777168   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.777197   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.778414   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.778459   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.778414   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.779057   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.882668   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0612 21:38:47.882770   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.902416   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0612 21:38:47.902532   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:47.917388   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0612 21:38:47.917417   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0612 21:38:47.917417   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0612 21:38:47.917473   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0612 21:38:47.917501   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:38:47.917528   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:47.917545   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:38:47.917500   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0612 21:38:47.917558   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.917594   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.917502   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:47.917559   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0612 21:38:47.929251   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0612 21:38:47.929299   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0612 21:38:47.929308   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0612 21:38:48.312589   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:50.713720   80157 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1: (2.796151375s)
	I0612 21:38:50.713767   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0612 21:38:50.713877   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.796263274s)
	I0612 21:38:50.713901   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0612 21:38:50.713877   80157 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.401254109s)
	I0612 21:38:50.713921   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:50.713966   80157 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0612 21:38:50.713987   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:50.714017   80157 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:50.714063   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.987863   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:48.487299   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:48.986886   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:49.486972   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:49.987859   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:50.487034   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:50.987724   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.486948   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.986873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:52.487668   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.665638   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:53.665855   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:51.512765   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:54.011870   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:53.169682   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.455668553s)
	I0612 21:38:53.169705   80157 ssh_runner.go:235] Completed: which crictl: (2.455619981s)
	I0612 21:38:53.169714   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0612 21:38:53.169741   80157 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:53.169759   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:53.169784   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:53.216895   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0612 21:38:53.217020   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:38:57.220343   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.050521066s)
	I0612 21:38:57.220376   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0612 21:38:57.220397   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:57.220444   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:57.220443   80157 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (4.003396955s)
	I0612 21:38:57.220487   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0612 21:38:52.987635   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:53.487500   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:53.987860   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:54.487855   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:54.986868   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:55.487259   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:55.987902   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.487535   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.987269   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:57.487542   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.166299   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.665085   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:56.012847   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.557142   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.682288   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.46182102s)
	I0612 21:38:58.682313   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0612 21:38:58.682337   80157 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:38:58.682376   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:39:00.576373   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.893964365s)
	I0612 21:39:00.576412   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0612 21:39:00.576443   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:39:00.576504   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:38:57.987222   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:58.486976   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:58.986913   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:59.487269   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:59.987289   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.487208   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.987690   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:01.487283   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:01.987541   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:02.487589   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.667732   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:03.165317   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:01.012684   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:03.015111   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:02.445930   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.86940281s)
	I0612 21:39:02.445960   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0612 21:39:02.445994   80157 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:39:02.446071   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:39:03.393330   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0612 21:39:03.393375   80157 cache_images.go:123] Successfully loaded all cached images
	I0612 21:39:03.393382   80157 cache_images.go:92] duration metric: took 15.9711807s to LoadCachedImages
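The image-loading phase above follows a stat-then-load pattern: each cached tarball under /var/lib/minikube/images is stat'ed on the node, copied only when missing (the "copy: skipping ... (exists)" lines), and then loaded into the CRI-O store with `sudo podman load -i`. A minimal Go sketch of that pattern; the run helper and paths are illustrative, not minikube's actual ssh_runner API:

package main

import (
	"fmt"
	"os/exec"
	"path"
)

// run is an illustrative stand-in for minikube's ssh_runner; it executes the
// command locally so the sketch stays self-contained.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
	}
	return nil
}

// loadCachedImage skips the copy when the tarball already exists on the node,
// then loads it into the container runtime via podman.
func loadCachedImage(cacheTarball string) error {
	dst := path.Join("/var/lib/minikube/images", path.Base(cacheTarball))
	if err := run("stat", dst); err != nil {
		// Not present yet: a real implementation would scp cacheTarball to dst here.
		fmt.Printf("copy: %s --> %s\n", cacheTarball, dst)
	} else {
		fmt.Printf("copy: skipping %s (exists)\n", dst)
	}
	return run("sudo", "podman", "load", "-i", dst)
}

func main() {
	for _, img := range []string{"coredns_v1.11.1", "kube-proxy_v1.30.1", "etcd_3.5.12-0"} {
		if err := loadCachedImage("/home/jenkins/.minikube/cache/images/amd64/" + img); err != nil {
			fmt.Println(err)
		}
	}
}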
	I0612 21:39:03.393397   80157 kubeadm.go:928] updating node { 192.168.72.63 8443 v1.30.1 crio true true} ...
	I0612 21:39:03.393543   80157 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-087875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:39:03.393658   80157 ssh_runner.go:195] Run: crio config
	I0612 21:39:03.448859   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:39:03.448884   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:39:03.448901   80157 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:39:03.448930   80157 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.63 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-087875 NodeName:no-preload-087875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:39:03.449103   80157 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-087875"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:39:03.449181   80157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:39:03.462756   80157 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:39:03.462825   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:39:03.472653   80157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0612 21:39:03.491567   80157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:39:03.509239   80157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
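The multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is the kubeadm configuration generated for no-preload-087875 and written to /var/tmp/minikube/kubeadm.yaml.new by the 2158-byte scp just above. A minimal Go sketch of rendering such a manifest from a template; the template is trimmed to a few fields and is illustrative, not the bootstrapper's real template:

package main

import (
	"os"
	"text/template"
)

// kubeadmTmpl is a cut-down version of the config shown in the log above;
// only the advertise address, node name, and Kubernetes version are parameterised.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`

type params struct {
	NodeIP            string
	NodeName          string
	KubernetesVersion string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the run above (no-preload-087875 on 192.168.72.63).
	if err := t.Execute(os.Stdout, params{
		NodeIP:            "192.168.72.63",
		NodeName:          "no-preload-087875",
		KubernetesVersion: "v1.30.1",
	}); err != nil {
		panic(err)
	}
}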
	I0612 21:39:03.527802   80157 ssh_runner.go:195] Run: grep 192.168.72.63	control-plane.minikube.internal$ /etc/hosts
	I0612 21:39:03.531523   80157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:39:03.543748   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:39:03.666376   80157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:39:03.683563   80157 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875 for IP: 192.168.72.63
	I0612 21:39:03.683587   80157 certs.go:194] generating shared ca certs ...
	I0612 21:39:03.683606   80157 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:39:03.683766   80157 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:39:03.683816   80157 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:39:03.683831   80157 certs.go:256] generating profile certs ...
	I0612 21:39:03.683927   80157 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/client.key
	I0612 21:39:03.684010   80157 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.key.13709275
	I0612 21:39:03.684066   80157 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.key
	I0612 21:39:03.684217   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:39:03.684259   80157 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:39:03.684272   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:39:03.684318   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:39:03.684364   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:39:03.684395   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:39:03.684455   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:39:03.685098   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:39:03.732817   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:39:03.771449   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:39:03.800774   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:39:03.831845   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 21:39:03.862000   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 21:39:03.901036   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:39:03.925025   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:39:03.950862   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:39:03.974222   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:39:04.002698   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:39:04.028173   80157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:39:04.044685   80157 ssh_runner.go:195] Run: openssl version
	I0612 21:39:04.050600   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:39:04.061893   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.066371   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.066424   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.072463   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:39:04.083929   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:39:04.094777   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.099380   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.099435   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.105125   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:39:04.116191   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:39:04.127408   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.132234   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.132315   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.138401   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:39:04.149542   80157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:39:04.154133   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:39:04.160171   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:39:04.166410   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:39:04.172650   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:39:04.178506   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:39:04.184375   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
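Before restarting the control plane, the run checks with `openssl x509 -noout -checkend 86400` that each control-plane certificate stays valid for at least another 24 hours. The equivalent check can be done directly in Go with crypto/x509; a small sketch, using the same on-node paths as above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the first certificate in a PEM file is still
// valid "window" from now - the same test `openssl x509 -checkend` performs.
func certValidFor(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		ok, err := certValidFor(p, 86400*time.Second)
		fmt.Println(p, ok, err)
	}
}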
	I0612 21:39:04.190412   80157 kubeadm.go:391] StartCluster: {Name:no-preload-087875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:39:04.190524   80157 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:39:04.190584   80157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:39:04.235297   80157 cri.go:89] found id: ""
	I0612 21:39:04.235362   80157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:39:04.246400   80157 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:39:04.246429   80157 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:39:04.246449   80157 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:39:04.246499   80157 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:39:04.257137   80157 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:39:04.258277   80157 kubeconfig.go:125] found "no-preload-087875" server: "https://192.168.72.63:8443"
	I0612 21:39:04.260656   80157 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:39:04.270637   80157 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.63
	I0612 21:39:04.270666   80157 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:39:04.270675   80157 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:39:04.270730   80157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:39:04.316487   80157 cri.go:89] found id: ""
	I0612 21:39:04.316550   80157 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:39:04.334814   80157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:39:04.346430   80157 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:39:04.346451   80157 kubeadm.go:156] found existing configuration files:
	
	I0612 21:39:04.346500   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:39:04.356362   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:39:04.356417   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:39:04.366999   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:39:04.378005   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:39:04.378061   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:39:04.388052   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:39:04.397130   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:39:04.397185   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:39:04.407053   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:39:04.416338   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:39:04.416395   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:39:04.426475   80157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:39:04.436852   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:04.565452   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.461610   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.676493   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.767236   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
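Because existing configuration files were found, the restart path re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config instead of a full `kubeadm init`, as the five commands above show. A compact Go sketch of driving those phases in order; it only propagates errors and does no retries:

package main

import (
	"fmt"
	"os/exec"
)

// restartControlPlane re-runs only the kubeadm init phases that the log above
// shows minikube using for a restart.
func restartControlPlane(kubeadmYAML string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{}, p...), "--config", kubeadmYAML)
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubeadm %v failed: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := restartControlPlane("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}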
	I0612 21:39:05.870855   80157 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:39:05.870960   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.372034   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.871680   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.906242   80157 api_server.go:72] duration metric: took 1.035387498s to wait for apiserver process to appear ...
	I0612 21:39:06.906273   80157 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:39:06.906296   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:06.906883   80157 api_server.go:269] stopped: https://192.168.72.63:8443/healthz: Get "https://192.168.72.63:8443/healthz": dial tcp 192.168.72.63:8443: connect: connection refused
	I0612 21:39:02.987853   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:03.487382   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:03.987303   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:04.487852   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:04.987464   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.486928   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.987660   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.487208   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.987822   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:07.487497   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.166502   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:07.665452   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:09.665766   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:05.512792   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:08.012392   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:10.014073   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:07.407227   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.589285   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:39:09.589319   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:39:09.589336   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.726716   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:09.726753   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:09.907032   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.917718   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:09.917746   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:10.406997   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:10.412127   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:10.412156   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:10.906700   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:10.911262   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 200:
	ok
	I0612 21:39:10.918778   80157 api_server.go:141] control plane version: v1.30.1
	I0612 21:39:10.918813   80157 api_server.go:131] duration metric: took 4.012531107s to wait for apiserver health ...
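The healthz sequence above is the usual shape of an apiserver restart: connection refused while the process starts, then 403 for the anonymous probe, then 500 while poststarthooks finish, and finally 200 "ok". A minimal Go poller for the same endpoint; TLS verification is skipped here only because, like the probe in the log, it runs anonymously without a client certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403/500 responses with poststarthook details are expected while
			// the apiserver is still starting; keep polling.
			fmt.Printf("healthz returned %d: %.60s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.63:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}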
	I0612 21:39:10.918824   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:39:10.918832   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:39:10.921012   80157 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:39:10.922401   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:39:10.948209   80157 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:39:10.974530   80157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:39:10.986054   80157 system_pods.go:59] 8 kube-system pods found
	I0612 21:39:10.986091   80157 system_pods.go:61] "coredns-7db6d8ff4d-sh68b" [17691219-bfda-443b-8049-e6e966aadb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:39:10.986102   80157 system_pods.go:61] "etcd-no-preload-087875" [3048b12a-4354-45fd-99c7-d2a84035e102] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:39:10.986114   80157 system_pods.go:61] "kube-apiserver-no-preload-087875" [0f39a5fd-1a64-479f-bb28-c19bc10b7ed3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:39:10.986127   80157 system_pods.go:61] "kube-controller-manager-no-preload-087875" [62cc49b8-b05f-4371-aa17-bea17d08d2f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:39:10.986141   80157 system_pods.go:61] "kube-proxy-htv9h" [e3eb4693-7896-4dd2-98b8-91f06b028a1e] Running
	I0612 21:39:10.986158   80157 system_pods.go:61] "kube-scheduler-no-preload-087875" [ef833b9d-75ca-43bd-b196-30594775b174] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:39:10.986170   80157 system_pods.go:61] "metrics-server-569cc877fc-d5mj6" [79ba2aad-c942-4162-b69a-5c7dd138a618] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:39:10.986178   80157 system_pods.go:61] "storage-provisioner" [5793c778-1a5c-4cfe-924a-b85b72df53cd] Running
	I0612 21:39:10.986187   80157 system_pods.go:74] duration metric: took 11.634011ms to wait for pod list to return data ...
	I0612 21:39:10.986199   80157 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:39:10.992801   80157 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:39:10.992843   80157 node_conditions.go:123] node cpu capacity is 2
	I0612 21:39:10.992856   80157 node_conditions.go:105] duration metric: took 6.648025ms to run NodePressure ...
	I0612 21:39:10.992878   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:11.263413   80157 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:39:11.271758   80157 kubeadm.go:733] kubelet initialised
	I0612 21:39:11.271781   80157 kubeadm.go:734] duration metric: took 8.347232ms waiting for restarted kubelet to initialise ...
	I0612 21:39:11.271789   80157 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:39:11.277940   80157 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:07.987732   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:08.486974   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:08.986873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:09.486941   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:09.986929   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:10.487754   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:10.987685   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:11.486910   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:11.987457   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:12.486873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:12.165604   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:14.166986   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:12.029928   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:14.512085   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:13.287555   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:15.786345   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:12.987394   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:13.486915   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:13.987880   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:14.486881   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:14.986951   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:15.487462   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:15.986850   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.487213   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.987066   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:17.487882   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.666123   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:18.666354   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:16.512936   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:19.013463   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:18.285110   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:20.788396   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:21.284869   80157 pod_ready.go:92] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:21.284902   80157 pod_ready.go:81] duration metric: took 10.006929439s for pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:21.284916   80157 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:17.987273   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:18.486996   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:18.987836   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:19.487622   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:19.987381   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:20.487005   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:20.987638   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.487670   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.987552   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:22.487438   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.166215   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:23.665272   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:21.512836   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:24.014108   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:23.291502   80157 pod_ready.go:102] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:25.791813   80157 pod_ready.go:92] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.791842   80157 pod_ready.go:81] duration metric: took 4.506916362s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.791854   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.796901   80157 pod_ready.go:92] pod "kube-apiserver-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.796928   80157 pod_ready.go:81] duration metric: took 5.066599ms for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.796939   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.801550   80157 pod_ready.go:92] pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.801571   80157 pod_ready.go:81] duration metric: took 4.624771ms for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.801580   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-htv9h" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.806178   80157 pod_ready.go:92] pod "kube-proxy-htv9h" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.806195   80157 pod_ready.go:81] duration metric: took 4.609956ms for pod "kube-proxy-htv9h" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.806204   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.809883   80157 pod_ready.go:92] pod "kube-scheduler-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.809902   80157 pod_ready.go:81] duration metric: took 3.691999ms for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.809914   80157 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:22.987165   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:23.487122   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:23.987804   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:24.487583   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:24.987647   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.487126   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.987251   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:26.486996   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:26.987044   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:27.486911   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.668272   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:28.164809   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:26.513220   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:29.013047   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:27.817352   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:30.315600   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:27.987822   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:28.487496   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:28.987166   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:29.487892   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:29.987787   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.487315   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.987933   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:31.487255   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:31.987793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:32.487881   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.165900   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.167795   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:34.665939   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:31.013473   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:33.015281   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.316680   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:34.317063   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:36.816905   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.987267   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:33.487678   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:33.987296   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:33.987371   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:34.028670   80762 cri.go:89] found id: ""
	I0612 21:39:34.028699   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.028710   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:34.028717   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:34.028778   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:34.068371   80762 cri.go:89] found id: ""
	I0612 21:39:34.068400   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.068412   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:34.068419   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:34.068485   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:34.104605   80762 cri.go:89] found id: ""
	I0612 21:39:34.104634   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.104643   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:34.104650   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:34.104745   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:34.150301   80762 cri.go:89] found id: ""
	I0612 21:39:34.150327   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.150335   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:34.150341   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:34.150396   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:34.191426   80762 cri.go:89] found id: ""
	I0612 21:39:34.191462   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.191475   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:34.191484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:34.191562   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:34.228483   80762 cri.go:89] found id: ""
	I0612 21:39:34.228523   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.228535   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:34.228543   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:34.228653   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:34.262834   80762 cri.go:89] found id: ""
	I0612 21:39:34.262863   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.262873   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:34.262881   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:34.262944   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:34.298283   80762 cri.go:89] found id: ""
	I0612 21:39:34.298312   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.298321   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:34.298330   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:34.298340   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:34.350889   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:34.350918   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:34.365264   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:34.365289   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:34.508130   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:34.508162   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:34.508180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:34.572036   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:34.572076   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:37.114371   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:37.127410   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:37.127492   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:37.168684   80762 cri.go:89] found id: ""
	I0612 21:39:37.168705   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.168714   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:37.168723   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:37.168798   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:37.208765   80762 cri.go:89] found id: ""
	I0612 21:39:37.208797   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.208808   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:37.208815   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:37.208875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:37.266245   80762 cri.go:89] found id: ""
	I0612 21:39:37.266270   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.266277   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:37.266283   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:37.266331   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:37.313557   80762 cri.go:89] found id: ""
	I0612 21:39:37.313586   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.313597   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:37.313606   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:37.313677   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:37.353292   80762 cri.go:89] found id: ""
	I0612 21:39:37.353318   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.353325   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:37.353332   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:37.353389   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:37.391940   80762 cri.go:89] found id: ""
	I0612 21:39:37.391974   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.391984   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:37.392015   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:37.392078   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:37.432133   80762 cri.go:89] found id: ""
	I0612 21:39:37.432154   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.432166   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:37.432174   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:37.432228   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:37.468274   80762 cri.go:89] found id: ""
	I0612 21:39:37.468302   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.468310   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:37.468328   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:37.468347   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:37.543904   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:37.543941   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:37.586957   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:37.586982   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:37.641247   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:37.641288   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:37.657076   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:37.657101   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:37.729279   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:37.165427   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:39.166383   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:35.512174   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:37.513222   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:40.012806   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:39.317119   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:41.817268   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:40.229638   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:40.243825   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:40.243889   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:40.282795   80762 cri.go:89] found id: ""
	I0612 21:39:40.282821   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.282829   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:40.282834   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:40.282879   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:40.320211   80762 cri.go:89] found id: ""
	I0612 21:39:40.320236   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.320246   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:40.320252   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:40.320338   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:40.356270   80762 cri.go:89] found id: ""
	I0612 21:39:40.356292   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.356300   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:40.356306   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:40.356353   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:40.394667   80762 cri.go:89] found id: ""
	I0612 21:39:40.394691   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.394699   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:40.394704   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:40.394751   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:40.432765   80762 cri.go:89] found id: ""
	I0612 21:39:40.432794   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.432804   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:40.432811   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:40.432883   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:40.472347   80762 cri.go:89] found id: ""
	I0612 21:39:40.472386   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.472406   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:40.472414   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:40.472477   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:40.508414   80762 cri.go:89] found id: ""
	I0612 21:39:40.508445   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.508456   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:40.508464   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:40.508521   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:40.546938   80762 cri.go:89] found id: ""
	I0612 21:39:40.546964   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.546972   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:40.546981   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:40.546993   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:40.621356   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:40.621380   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:40.621398   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:40.703830   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:40.703865   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:40.744915   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:40.744965   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:40.798883   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:40.798920   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:41.167469   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:43.667403   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:42.512351   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:44.512639   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:44.317053   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:46.317350   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:43.315905   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:43.330150   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:43.330221   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:43.377307   80762 cri.go:89] found id: ""
	I0612 21:39:43.377337   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.377347   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:43.377362   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:43.377426   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:43.412608   80762 cri.go:89] found id: ""
	I0612 21:39:43.412638   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.412648   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:43.412654   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:43.412718   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:43.446716   80762 cri.go:89] found id: ""
	I0612 21:39:43.446746   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.446755   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:43.446762   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:43.446823   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:43.484607   80762 cri.go:89] found id: ""
	I0612 21:39:43.484636   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.484647   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:43.484655   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:43.484700   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:43.522400   80762 cri.go:89] found id: ""
	I0612 21:39:43.522427   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.522438   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:43.522445   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:43.522529   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:43.559121   80762 cri.go:89] found id: ""
	I0612 21:39:43.559147   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.559163   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:43.559211   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:43.559292   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:43.595886   80762 cri.go:89] found id: ""
	I0612 21:39:43.595919   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.595937   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:43.595945   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:43.596011   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:43.638549   80762 cri.go:89] found id: ""
	I0612 21:39:43.638573   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.638583   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:43.638594   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:43.638609   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:43.705300   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:43.705338   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:43.723246   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:43.723281   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:43.807735   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:43.807760   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:43.807870   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:43.882971   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:43.883017   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:46.421476   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:46.434447   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:46.434532   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:46.470710   80762 cri.go:89] found id: ""
	I0612 21:39:46.470745   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.470758   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:46.470765   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:46.470828   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:46.504843   80762 cri.go:89] found id: ""
	I0612 21:39:46.504871   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.504878   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:46.504884   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:46.504941   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:46.542937   80762 cri.go:89] found id: ""
	I0612 21:39:46.542965   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.542973   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:46.542979   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:46.543035   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:46.581098   80762 cri.go:89] found id: ""
	I0612 21:39:46.581124   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.581133   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:46.581143   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:46.581189   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:46.617289   80762 cri.go:89] found id: ""
	I0612 21:39:46.617319   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.617329   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:46.617337   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:46.617402   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:46.651012   80762 cri.go:89] found id: ""
	I0612 21:39:46.651045   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.651057   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:46.651070   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:46.651141   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:46.688344   80762 cri.go:89] found id: ""
	I0612 21:39:46.688370   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.688379   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:46.688388   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:46.688451   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:46.724349   80762 cri.go:89] found id: ""
	I0612 21:39:46.724374   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.724382   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:46.724390   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:46.724404   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:46.797866   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:46.797894   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:46.797912   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:46.887520   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:46.887557   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:46.928143   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:46.928182   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:46.981416   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:46.981451   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:46.164845   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:48.166925   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:46.513519   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:49.016041   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:48.816335   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:50.816407   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:49.497028   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:49.510077   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:49.510147   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:49.544313   80762 cri.go:89] found id: ""
	I0612 21:39:49.544349   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.544359   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:49.544365   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:49.544416   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:49.580220   80762 cri.go:89] found id: ""
	I0612 21:39:49.580248   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.580256   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:49.580262   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:49.580316   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:49.619582   80762 cri.go:89] found id: ""
	I0612 21:39:49.619607   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.619615   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:49.619620   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:49.619692   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:49.656453   80762 cri.go:89] found id: ""
	I0612 21:39:49.656479   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.656487   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:49.656493   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:49.656557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:49.694285   80762 cri.go:89] found id: ""
	I0612 21:39:49.694318   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.694330   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:49.694338   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:49.694417   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:49.731100   80762 cri.go:89] found id: ""
	I0612 21:39:49.731127   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.731135   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:49.731140   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:49.731209   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:49.767709   80762 cri.go:89] found id: ""
	I0612 21:39:49.767731   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.767738   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:49.767744   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:49.767787   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:49.801231   80762 cri.go:89] found id: ""
	I0612 21:39:49.801265   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.801283   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:49.801294   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:49.801309   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:49.848500   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:49.848542   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:49.900084   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:49.900121   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:49.916208   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:49.916234   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:49.983283   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:49.983310   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:49.983325   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:52.566884   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:52.580400   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:52.580476   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:52.615922   80762 cri.go:89] found id: ""
	I0612 21:39:52.615957   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.615970   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:52.615978   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:52.616038   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:52.657316   80762 cri.go:89] found id: ""
	I0612 21:39:52.657348   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.657356   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:52.657362   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:52.657417   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:52.692426   80762 cri.go:89] found id: ""
	I0612 21:39:52.692459   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.692470   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:52.692478   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:52.692542   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:52.726800   80762 cri.go:89] found id: ""
	I0612 21:39:52.726835   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.726848   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:52.726856   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:52.726921   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:52.764283   80762 cri.go:89] found id: ""
	I0612 21:39:52.764314   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.764326   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:52.764341   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:52.764395   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:52.802279   80762 cri.go:89] found id: ""
	I0612 21:39:52.802311   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.802324   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:52.802331   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:52.802385   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:52.841433   80762 cri.go:89] found id: ""
	I0612 21:39:52.841466   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.841477   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:52.841484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:52.841546   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:50.667322   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:53.165294   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:51.016137   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:53.019373   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:52.818876   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:55.316845   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:52.881417   80762 cri.go:89] found id: ""
	I0612 21:39:52.881441   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.881449   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:52.881457   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:52.881468   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:52.936228   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:52.936262   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:52.950688   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:52.950718   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:53.025101   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:53.025122   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:53.025138   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:53.114986   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:53.115031   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:55.653893   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:55.668983   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:55.669047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:55.708445   80762 cri.go:89] found id: ""
	I0612 21:39:55.708475   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.708486   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:55.708494   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:55.708558   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:55.745158   80762 cri.go:89] found id: ""
	I0612 21:39:55.745185   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.745195   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:55.745204   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:55.745270   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:55.785322   80762 cri.go:89] found id: ""
	I0612 21:39:55.785344   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.785363   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:55.785370   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:55.785442   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:55.822371   80762 cri.go:89] found id: ""
	I0612 21:39:55.822397   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.822408   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:55.822416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:55.822484   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:55.856866   80762 cri.go:89] found id: ""
	I0612 21:39:55.856888   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.856895   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:55.856900   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:55.856954   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:55.891618   80762 cri.go:89] found id: ""
	I0612 21:39:55.891648   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.891660   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:55.891668   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:55.891731   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:55.927483   80762 cri.go:89] found id: ""
	I0612 21:39:55.927504   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.927513   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:55.927519   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:55.927572   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:55.963546   80762 cri.go:89] found id: ""
	I0612 21:39:55.963572   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.963584   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:55.963597   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:55.963616   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:56.037421   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:56.037442   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:56.037453   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:56.112148   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:56.112185   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:56.163359   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:56.163389   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:56.217109   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:56.217144   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:55.166499   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:57.665517   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:59.665625   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:55.513267   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:58.015558   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:57.317149   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:59.320306   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:01.815855   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:58.733278   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:58.746890   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:58.746951   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:58.785222   80762 cri.go:89] found id: ""
	I0612 21:39:58.785252   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.785263   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:58.785269   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:58.785343   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:58.824421   80762 cri.go:89] found id: ""
	I0612 21:39:58.824448   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.824455   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:58.824461   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:58.824521   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:58.863626   80762 cri.go:89] found id: ""
	I0612 21:39:58.863658   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.863669   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:58.863728   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:58.863818   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:58.904040   80762 cri.go:89] found id: ""
	I0612 21:39:58.904064   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.904073   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:58.904080   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:58.904147   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:58.937508   80762 cri.go:89] found id: ""
	I0612 21:39:58.937543   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.937557   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:58.937565   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:58.937632   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:58.974283   80762 cri.go:89] found id: ""
	I0612 21:39:58.974311   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.974322   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:58.974330   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:58.974383   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:59.009954   80762 cri.go:89] found id: ""
	I0612 21:39:59.009987   80762 logs.go:276] 0 containers: []
	W0612 21:39:59.009999   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:59.010007   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:59.010072   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:59.051911   80762 cri.go:89] found id: ""
	I0612 21:39:59.051935   80762 logs.go:276] 0 containers: []
	W0612 21:39:59.051943   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:59.051951   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:59.051961   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:59.102911   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:59.102942   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:59.116576   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:59.116608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:59.189590   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:59.189619   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:59.189634   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:59.270192   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:59.270232   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:01.820872   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:01.834916   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:01.835000   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:01.870526   80762 cri.go:89] found id: ""
	I0612 21:40:01.870560   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.870572   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:01.870579   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:01.870642   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:01.909581   80762 cri.go:89] found id: ""
	I0612 21:40:01.909614   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.909626   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:01.909633   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:01.909727   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:01.947944   80762 cri.go:89] found id: ""
	I0612 21:40:01.947976   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.947988   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:01.947995   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:01.948059   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:01.985745   80762 cri.go:89] found id: ""
	I0612 21:40:01.985781   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.985793   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:01.985800   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:01.985860   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:02.023716   80762 cri.go:89] found id: ""
	I0612 21:40:02.023741   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.023749   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:02.023754   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:02.023801   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:02.059136   80762 cri.go:89] found id: ""
	I0612 21:40:02.059168   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.059203   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:02.059212   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:02.059283   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:02.104520   80762 cri.go:89] found id: ""
	I0612 21:40:02.104544   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.104552   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:02.104558   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:02.104618   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:02.146130   80762 cri.go:89] found id: ""
	I0612 21:40:02.146164   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.146176   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:02.146187   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:02.146202   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:02.199672   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:02.199710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:02.215224   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:02.215256   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:02.290030   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:02.290057   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:02.290072   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:02.374579   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:02.374615   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:01.667390   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:04.165253   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:00.512229   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:02.513298   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:05.018848   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:03.816610   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:05.818990   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:04.915345   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:04.928323   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:04.928404   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:04.963267   80762 cri.go:89] found id: ""
	I0612 21:40:04.963297   80762 logs.go:276] 0 containers: []
	W0612 21:40:04.963310   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:04.963319   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:04.963386   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:04.998378   80762 cri.go:89] found id: ""
	I0612 21:40:04.998409   80762 logs.go:276] 0 containers: []
	W0612 21:40:04.998420   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:04.998426   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:04.998498   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:05.038094   80762 cri.go:89] found id: ""
	I0612 21:40:05.038118   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.038126   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:05.038132   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:05.038181   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:05.074331   80762 cri.go:89] found id: ""
	I0612 21:40:05.074366   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.074379   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:05.074386   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:05.074462   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:05.109332   80762 cri.go:89] found id: ""
	I0612 21:40:05.109359   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.109368   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:05.109373   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:05.109423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:05.143875   80762 cri.go:89] found id: ""
	I0612 21:40:05.143908   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.143918   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:05.143926   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:05.143990   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:05.183695   80762 cri.go:89] found id: ""
	I0612 21:40:05.183724   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.183731   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:05.183737   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:05.183792   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:05.222852   80762 cri.go:89] found id: ""
	I0612 21:40:05.222878   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.222887   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:05.222895   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:05.222907   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:05.262661   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:05.262687   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:05.315563   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:05.315593   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:05.332128   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:05.332163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:05.411675   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:05.411699   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:05.411712   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:06.665324   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:08.667163   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:07.512587   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:10.012843   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:08.316990   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:10.816093   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:07.991930   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:08.005743   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:08.005807   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:08.041685   80762 cri.go:89] found id: ""
	I0612 21:40:08.041714   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.041724   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:08.041732   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:08.041791   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:08.080875   80762 cri.go:89] found id: ""
	I0612 21:40:08.080905   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.080916   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:08.080925   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:08.080993   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:08.117290   80762 cri.go:89] found id: ""
	I0612 21:40:08.117316   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.117323   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:08.117329   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:08.117387   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:08.154345   80762 cri.go:89] found id: ""
	I0612 21:40:08.154376   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.154387   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:08.154395   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:08.154459   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:08.192913   80762 cri.go:89] found id: ""
	I0612 21:40:08.192947   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.192957   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:08.192969   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:08.193033   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:08.235732   80762 cri.go:89] found id: ""
	I0612 21:40:08.235764   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.235775   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:08.235782   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:08.235853   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:08.274282   80762 cri.go:89] found id: ""
	I0612 21:40:08.274306   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.274314   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:08.274320   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:08.274366   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:08.314585   80762 cri.go:89] found id: ""
	I0612 21:40:08.314608   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.314619   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:08.314628   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:08.314641   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:08.331693   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:08.331725   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:08.414541   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:08.414565   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:08.414584   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:08.496428   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:08.496460   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:08.546991   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:08.547020   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:11.099778   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:11.113450   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:11.113539   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:11.150426   80762 cri.go:89] found id: ""
	I0612 21:40:11.150451   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.150459   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:11.150464   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:11.150524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:11.189931   80762 cri.go:89] found id: ""
	I0612 21:40:11.189958   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.189967   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:11.189972   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:11.190031   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:11.228116   80762 cri.go:89] found id: ""
	I0612 21:40:11.228144   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.228154   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:11.228161   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:11.228243   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:11.268639   80762 cri.go:89] found id: ""
	I0612 21:40:11.268664   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.268672   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:11.268678   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:11.268723   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:11.306077   80762 cri.go:89] found id: ""
	I0612 21:40:11.306105   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.306116   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:11.306123   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:11.306187   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:11.344360   80762 cri.go:89] found id: ""
	I0612 21:40:11.344388   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.344399   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:11.344418   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:11.344475   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:11.382906   80762 cri.go:89] found id: ""
	I0612 21:40:11.382937   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.382948   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:11.382957   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:11.383027   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:11.418388   80762 cri.go:89] found id: ""
	I0612 21:40:11.418419   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.418429   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:11.418439   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:11.418453   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:11.432204   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:11.432241   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:11.508219   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:11.508251   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:11.508263   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:11.593021   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:11.593058   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:11.634056   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:11.634087   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:11.165384   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:13.170153   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:12.013303   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:14.013454   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:12.817129   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:15.316929   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:14.187831   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:14.203153   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:14.203248   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:14.239693   80762 cri.go:89] found id: ""
	I0612 21:40:14.239716   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.239723   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:14.239729   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:14.239827   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:14.273206   80762 cri.go:89] found id: ""
	I0612 21:40:14.273234   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.273244   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:14.273251   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:14.273313   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:14.315512   80762 cri.go:89] found id: ""
	I0612 21:40:14.315592   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.315610   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:14.315618   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:14.315679   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:14.352454   80762 cri.go:89] found id: ""
	I0612 21:40:14.352483   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.352496   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:14.352504   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:14.352554   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:14.387845   80762 cri.go:89] found id: ""
	I0612 21:40:14.387872   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.387880   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:14.387886   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:14.387935   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:14.423220   80762 cri.go:89] found id: ""
	I0612 21:40:14.423245   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.423254   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:14.423259   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:14.423322   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:14.457744   80762 cri.go:89] found id: ""
	I0612 21:40:14.457772   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.457784   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:14.457791   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:14.457849   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:14.493580   80762 cri.go:89] found id: ""
	I0612 21:40:14.493611   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.493622   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:14.493633   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:14.493669   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:14.566867   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:14.566894   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:14.566913   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:14.645916   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:14.645959   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:14.690232   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:14.690262   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:14.741532   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:14.741576   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:17.257886   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:17.271841   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:17.271910   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:17.309628   80762 cri.go:89] found id: ""
	I0612 21:40:17.309654   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.309667   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:17.309675   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:17.309746   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:17.346671   80762 cri.go:89] found id: ""
	I0612 21:40:17.346752   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.346769   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:17.346777   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:17.346842   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:17.381145   80762 cri.go:89] found id: ""
	I0612 21:40:17.381169   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.381177   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:17.381184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:17.381241   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:17.417159   80762 cri.go:89] found id: ""
	I0612 21:40:17.417179   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.417187   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:17.417194   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:17.417254   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:17.453189   80762 cri.go:89] found id: ""
	I0612 21:40:17.453213   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.453220   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:17.453226   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:17.453284   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:17.510988   80762 cri.go:89] found id: ""
	I0612 21:40:17.511012   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.511019   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:17.511026   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:17.511083   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:17.548141   80762 cri.go:89] found id: ""
	I0612 21:40:17.548166   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.548176   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:17.548182   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:17.548243   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:17.584591   80762 cri.go:89] found id: ""
	I0612 21:40:17.584619   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.584627   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:17.584637   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:17.584647   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:17.628627   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:17.628662   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:17.682792   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:17.682823   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:17.697921   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:17.697959   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:17.770591   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:17.770617   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:17.770633   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:15.665831   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:18.165059   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:16.014130   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:18.513491   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:17.817443   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.316576   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.350181   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:20.363671   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:20.363743   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:20.399858   80762 cri.go:89] found id: ""
	I0612 21:40:20.399889   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.399896   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:20.399903   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:20.399963   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:20.437715   80762 cri.go:89] found id: ""
	I0612 21:40:20.437755   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.437766   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:20.437776   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:20.437843   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:20.472525   80762 cri.go:89] found id: ""
	I0612 21:40:20.472558   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.472573   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:20.472582   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:20.472642   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:20.507923   80762 cri.go:89] found id: ""
	I0612 21:40:20.507948   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.507959   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:20.507966   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:20.508029   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:20.545471   80762 cri.go:89] found id: ""
	I0612 21:40:20.545502   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.545512   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:20.545519   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:20.545586   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:20.583793   80762 cri.go:89] found id: ""
	I0612 21:40:20.583829   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.583839   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:20.583846   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:20.583912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:20.624399   80762 cri.go:89] found id: ""
	I0612 21:40:20.624438   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.624449   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:20.624467   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:20.624530   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:20.665158   80762 cri.go:89] found id: ""
	I0612 21:40:20.665184   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.665194   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:20.665203   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:20.665217   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:20.743062   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:20.743101   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:20.792573   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:20.792613   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:20.847998   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:20.848033   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:20.863447   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:20.863497   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:20.938020   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:20.165455   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:22.665110   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:24.665262   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.513556   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:23.014750   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:22.316950   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:24.815377   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:26.817066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:23.438289   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:23.453792   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:23.453855   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:23.494044   80762 cri.go:89] found id: ""
	I0612 21:40:23.494070   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.494077   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:23.494083   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:23.494144   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:23.533278   80762 cri.go:89] found id: ""
	I0612 21:40:23.533305   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.533313   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:23.533319   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:23.533380   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:23.568504   80762 cri.go:89] found id: ""
	I0612 21:40:23.568538   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.568549   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:23.568556   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:23.568619   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:23.610596   80762 cri.go:89] found id: ""
	I0612 21:40:23.610624   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.610633   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:23.610638   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:23.610690   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:23.651856   80762 cri.go:89] found id: ""
	I0612 21:40:23.651886   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.651896   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:23.651903   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:23.651978   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:23.690989   80762 cri.go:89] found id: ""
	I0612 21:40:23.691020   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.691030   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:23.691036   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:23.691089   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:23.730417   80762 cri.go:89] found id: ""
	I0612 21:40:23.730454   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.730467   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:23.730476   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:23.730538   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:23.773887   80762 cri.go:89] found id: ""
	I0612 21:40:23.773913   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.773921   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:23.773932   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:23.773947   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:23.825771   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:23.825805   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:23.840136   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:23.840163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:23.933645   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:23.933670   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:23.933686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:24.020205   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:24.020243   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:26.566746   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:26.579557   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:26.579612   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:26.614721   80762 cri.go:89] found id: ""
	I0612 21:40:26.614749   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.614757   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:26.614763   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:26.614815   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:26.651398   80762 cri.go:89] found id: ""
	I0612 21:40:26.651427   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.651437   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:26.651445   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:26.651506   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:26.688217   80762 cri.go:89] found id: ""
	I0612 21:40:26.688249   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.688261   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:26.688268   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:26.688333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:26.721316   80762 cri.go:89] found id: ""
	I0612 21:40:26.721346   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.721357   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:26.721364   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:26.721424   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:26.758842   80762 cri.go:89] found id: ""
	I0612 21:40:26.758868   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.758878   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:26.758885   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:26.758957   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:26.795696   80762 cri.go:89] found id: ""
	I0612 21:40:26.795725   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.795733   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:26.795738   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:26.795788   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:26.834903   80762 cri.go:89] found id: ""
	I0612 21:40:26.834932   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.834941   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:26.834947   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:26.835020   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:26.872751   80762 cri.go:89] found id: ""
	I0612 21:40:26.872788   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.872796   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:26.872805   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:26.872817   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:26.952401   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:26.952440   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:26.990548   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:26.990583   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:27.042973   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:27.043029   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:27.058348   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:27.058379   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:27.133047   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:26.666430   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.165063   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:25.513982   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:28.012556   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:30.017664   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.315668   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:31.316817   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.634105   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:29.654113   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:29.654171   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:29.700138   80762 cri.go:89] found id: ""
	I0612 21:40:29.700169   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.700179   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:29.700188   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:29.700260   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:29.751599   80762 cri.go:89] found id: ""
	I0612 21:40:29.751628   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.751638   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:29.751646   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:29.751699   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:29.801971   80762 cri.go:89] found id: ""
	I0612 21:40:29.801995   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.802003   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:29.802008   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:29.802059   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:29.839381   80762 cri.go:89] found id: ""
	I0612 21:40:29.839407   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.839418   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:29.839426   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:29.839484   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:29.876634   80762 cri.go:89] found id: ""
	I0612 21:40:29.876661   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.876668   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:29.876675   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:29.876721   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:29.909673   80762 cri.go:89] found id: ""
	I0612 21:40:29.909707   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.909718   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:29.909726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:29.909791   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:29.947984   80762 cri.go:89] found id: ""
	I0612 21:40:29.948019   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.948029   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:29.948037   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:29.948099   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:29.988611   80762 cri.go:89] found id: ""
	I0612 21:40:29.988639   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.988650   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:29.988660   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:29.988675   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:30.073180   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:30.073216   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:30.114703   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:30.114732   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:30.173242   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:30.173278   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:30.189081   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:30.189112   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:30.263564   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:32.763967   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:32.776738   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:32.776808   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:32.813088   80762 cri.go:89] found id: ""
	I0612 21:40:32.813115   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.813125   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:32.813132   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:32.813195   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:32.850960   80762 cri.go:89] found id: ""
	I0612 21:40:32.850987   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.850996   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:32.851004   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:32.851065   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:31.166578   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:33.669302   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:32.512480   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:34.512817   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:33.815867   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:35.817105   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:32.887229   80762 cri.go:89] found id: ""
	I0612 21:40:32.887259   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.887270   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:32.887277   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:32.887346   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:32.923123   80762 cri.go:89] found id: ""
	I0612 21:40:32.923148   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.923158   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:32.923164   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:32.923242   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:32.962603   80762 cri.go:89] found id: ""
	I0612 21:40:32.962628   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.962638   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:32.962644   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:32.962695   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:32.998971   80762 cri.go:89] found id: ""
	I0612 21:40:32.999025   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.999037   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:32.999046   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:32.999120   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:33.037640   80762 cri.go:89] found id: ""
	I0612 21:40:33.037670   80762 logs.go:276] 0 containers: []
	W0612 21:40:33.037680   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:33.037686   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:33.037748   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:33.073758   80762 cri.go:89] found id: ""
	I0612 21:40:33.073787   80762 logs.go:276] 0 containers: []
	W0612 21:40:33.073794   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:33.073804   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:33.073815   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:33.124478   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:33.124512   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:33.139010   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:33.139036   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:33.207693   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:33.207716   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:33.207732   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:33.287710   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:33.287746   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:35.831654   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:35.845783   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:35.845845   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:35.882097   80762 cri.go:89] found id: ""
	I0612 21:40:35.882129   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.882141   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:35.882149   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:35.882205   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:35.920931   80762 cri.go:89] found id: ""
	I0612 21:40:35.920972   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.920980   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:35.920985   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:35.921061   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:35.958689   80762 cri.go:89] found id: ""
	I0612 21:40:35.958712   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.958721   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:35.958726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:35.958774   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:35.994973   80762 cri.go:89] found id: ""
	I0612 21:40:35.995028   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.995040   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:35.995048   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:35.995114   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:36.035679   80762 cri.go:89] found id: ""
	I0612 21:40:36.035707   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.035715   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:36.035721   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:36.035768   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:36.071498   80762 cri.go:89] found id: ""
	I0612 21:40:36.071525   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.071534   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:36.071544   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:36.071594   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:36.107367   80762 cri.go:89] found id: ""
	I0612 21:40:36.107397   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.107406   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:36.107413   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:36.107466   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:36.148668   80762 cri.go:89] found id: ""
	I0612 21:40:36.148699   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.148710   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:36.148721   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:36.148736   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:36.207719   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:36.207765   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:36.223129   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:36.223158   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:36.290786   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:36.290809   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:36.290822   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:36.375361   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:36.375398   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:36.165430   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.165989   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:37.015936   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:39.513497   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.318886   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:40.815802   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.921100   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:38.935420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:38.935491   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:38.970519   80762 cri.go:89] found id: ""
	I0612 21:40:38.970548   80762 logs.go:276] 0 containers: []
	W0612 21:40:38.970559   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:38.970567   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:38.970639   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:39.005866   80762 cri.go:89] found id: ""
	I0612 21:40:39.005888   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.005896   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:39.005902   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:39.005954   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:39.043619   80762 cri.go:89] found id: ""
	I0612 21:40:39.043647   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.043655   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:39.043661   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:39.043709   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:39.081311   80762 cri.go:89] found id: ""
	I0612 21:40:39.081336   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.081344   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:39.081350   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:39.081410   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:39.117326   80762 cri.go:89] found id: ""
	I0612 21:40:39.117358   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.117367   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:39.117372   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:39.117423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:39.151785   80762 cri.go:89] found id: ""
	I0612 21:40:39.151819   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.151828   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:39.151835   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:39.151899   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:39.187031   80762 cri.go:89] found id: ""
	I0612 21:40:39.187057   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.187065   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:39.187071   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:39.187119   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:39.222186   80762 cri.go:89] found id: ""
	I0612 21:40:39.222212   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.222223   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:39.222233   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:39.222245   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:39.276126   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:39.276164   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:39.291631   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:39.291658   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:39.365615   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:39.365641   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:39.365659   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:39.442548   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:39.442600   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:41.980840   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:41.996629   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:41.996686   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:42.034158   80762 cri.go:89] found id: ""
	I0612 21:40:42.034186   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.034195   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:42.034202   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:42.034274   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:42.070981   80762 cri.go:89] found id: ""
	I0612 21:40:42.071011   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.071021   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:42.071028   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:42.071093   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:42.108282   80762 cri.go:89] found id: ""
	I0612 21:40:42.108309   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.108316   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:42.108322   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:42.108369   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:42.146394   80762 cri.go:89] found id: ""
	I0612 21:40:42.146423   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.146434   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:42.146454   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:42.146539   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:42.183577   80762 cri.go:89] found id: ""
	I0612 21:40:42.183601   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.183608   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:42.183614   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:42.183662   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:42.222069   80762 cri.go:89] found id: ""
	I0612 21:40:42.222100   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.222109   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:42.222115   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:42.222168   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:42.259128   80762 cri.go:89] found id: ""
	I0612 21:40:42.259155   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.259164   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:42.259192   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:42.259282   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:42.296321   80762 cri.go:89] found id: ""
	I0612 21:40:42.296354   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.296368   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:42.296380   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:42.296400   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:42.311098   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:42.311137   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:42.386116   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:42.386144   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:42.386163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:42.467016   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:42.467054   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:42.509143   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:42.509180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:40.166288   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.664817   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:44.665596   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.017043   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:44.513368   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.816702   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:45.316890   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:45.062872   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:45.076570   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:45.076658   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:45.114362   80762 cri.go:89] found id: ""
	I0612 21:40:45.114394   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.114404   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:45.114412   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:45.114478   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:45.151577   80762 cri.go:89] found id: ""
	I0612 21:40:45.151609   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.151620   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:45.151627   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:45.151689   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:45.188753   80762 cri.go:89] found id: ""
	I0612 21:40:45.188785   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.188795   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:45.188802   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:45.188861   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:45.224775   80762 cri.go:89] found id: ""
	I0612 21:40:45.224801   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.224808   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:45.224814   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:45.224873   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:45.260440   80762 cri.go:89] found id: ""
	I0612 21:40:45.260472   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.260483   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:45.260490   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:45.260547   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:45.297662   80762 cri.go:89] found id: ""
	I0612 21:40:45.297697   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.297709   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:45.297716   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:45.297774   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:45.335637   80762 cri.go:89] found id: ""
	I0612 21:40:45.335669   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.335682   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:45.335690   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:45.335753   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:45.371523   80762 cri.go:89] found id: ""
	I0612 21:40:45.371580   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.371590   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:45.371599   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:45.371610   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:45.424029   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:45.424065   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:45.440339   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:45.440378   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:45.509504   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:45.509526   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:45.509541   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:45.591857   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:45.591893   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:47.166437   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.665544   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:47.016561   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.511894   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:47.320090   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.816816   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:48.135912   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:48.151271   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:48.151331   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:48.192740   80762 cri.go:89] found id: ""
	I0612 21:40:48.192775   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.192788   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:48.192798   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:48.192875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:48.230440   80762 cri.go:89] found id: ""
	I0612 21:40:48.230469   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.230479   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:48.230487   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:48.230549   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:48.270892   80762 cri.go:89] found id: ""
	I0612 21:40:48.270922   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.270933   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:48.270941   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:48.270996   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:48.308555   80762 cri.go:89] found id: ""
	I0612 21:40:48.308580   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.308588   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:48.308594   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:48.308640   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:48.342705   80762 cri.go:89] found id: ""
	I0612 21:40:48.342727   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.342735   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:48.342741   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:48.342788   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:48.377418   80762 cri.go:89] found id: ""
	I0612 21:40:48.377450   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.377461   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:48.377468   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:48.377535   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:48.413092   80762 cri.go:89] found id: ""
	I0612 21:40:48.413126   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.413141   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:48.413149   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:48.413215   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:48.447673   80762 cri.go:89] found id: ""
	I0612 21:40:48.447699   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.447708   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:48.447716   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:48.447728   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:48.488508   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:48.488542   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:48.540573   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:48.540608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:48.554735   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:48.554762   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:48.632074   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:48.632098   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:48.632117   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:51.212336   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:51.227428   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:51.227493   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:51.268124   80762 cri.go:89] found id: ""
	I0612 21:40:51.268157   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.268167   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:51.268172   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:51.268220   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:51.305751   80762 cri.go:89] found id: ""
	I0612 21:40:51.305777   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.305785   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:51.305793   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:51.305849   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:51.347292   80762 cri.go:89] found id: ""
	I0612 21:40:51.347318   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.347325   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:51.347332   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:51.347394   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:51.387476   80762 cri.go:89] found id: ""
	I0612 21:40:51.387501   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.387509   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:51.387515   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:51.387573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:51.431992   80762 cri.go:89] found id: ""
	I0612 21:40:51.432019   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.432029   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:51.432036   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:51.432096   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:51.477204   80762 cri.go:89] found id: ""
	I0612 21:40:51.477235   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.477246   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:51.477254   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:51.477346   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:51.518449   80762 cri.go:89] found id: ""
	I0612 21:40:51.518477   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.518488   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:51.518502   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:51.518562   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:51.554991   80762 cri.go:89] found id: ""
	I0612 21:40:51.555015   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.555024   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:51.555033   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:51.555046   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:51.606732   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:51.606769   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:51.620512   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:51.620538   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:51.697029   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:51.697058   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:51.697074   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:51.775401   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:51.775437   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:51.666561   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.166247   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:51.512909   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.012887   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:52.315904   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.316764   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:56.816819   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.318059   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:54.331420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:54.331509   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:54.367886   80762 cri.go:89] found id: ""
	I0612 21:40:54.367926   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.367948   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:54.367959   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:54.368047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:54.403998   80762 cri.go:89] found id: ""
	I0612 21:40:54.404023   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.404034   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:54.404041   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:54.404108   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:54.441449   80762 cri.go:89] found id: ""
	I0612 21:40:54.441480   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.441491   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:54.441498   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:54.441557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:54.476459   80762 cri.go:89] found id: ""
	I0612 21:40:54.476490   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.476500   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:54.476508   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:54.476573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:54.515337   80762 cri.go:89] found id: ""
	I0612 21:40:54.515360   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.515368   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:54.515374   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:54.515423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:54.551447   80762 cri.go:89] found id: ""
	I0612 21:40:54.551468   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.551475   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:54.551481   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:54.551528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:54.587082   80762 cri.go:89] found id: ""
	I0612 21:40:54.587114   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.587125   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:54.587145   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:54.587225   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:54.624211   80762 cri.go:89] found id: ""
	I0612 21:40:54.624235   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.624257   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:54.624268   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:54.624282   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:54.677816   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:54.677848   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:54.693725   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:54.693749   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:54.772229   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:54.772255   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:54.772273   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:54.852543   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:54.852578   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:57.397722   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:57.411082   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:57.411145   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:57.449633   80762 cri.go:89] found id: ""
	I0612 21:40:57.449662   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.449673   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:57.449680   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:57.449745   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:57.489855   80762 cri.go:89] found id: ""
	I0612 21:40:57.489880   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.489889   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:57.489894   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:57.489952   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:57.528986   80762 cri.go:89] found id: ""
	I0612 21:40:57.529006   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.529014   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:57.529019   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:57.529081   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:57.566701   80762 cri.go:89] found id: ""
	I0612 21:40:57.566730   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.566739   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:57.566746   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:57.566800   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:57.601114   80762 cri.go:89] found id: ""
	I0612 21:40:57.601137   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.601145   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:57.601151   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:57.601212   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:57.636120   80762 cri.go:89] found id: ""
	I0612 21:40:57.636145   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.636155   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:57.636163   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:57.636225   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:57.676912   80762 cri.go:89] found id: ""
	I0612 21:40:57.676953   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.676960   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:57.676966   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:57.677039   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:57.714671   80762 cri.go:89] found id: ""
	I0612 21:40:57.714691   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.714699   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:57.714707   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:57.714720   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:57.770550   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:57.770583   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:57.785062   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:57.785093   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:57.853448   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:57.853468   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:57.853480   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:56.167768   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.665108   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:56.014274   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.014535   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.816961   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:00.817450   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:57.939957   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:57.939999   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:00.493469   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:00.509746   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:00.509819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:00.546582   80762 cri.go:89] found id: ""
	I0612 21:41:00.546610   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.546620   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:00.546629   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:00.546683   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:00.584229   80762 cri.go:89] found id: ""
	I0612 21:41:00.584256   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.584264   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:00.584269   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:00.584337   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:00.618679   80762 cri.go:89] found id: ""
	I0612 21:41:00.618704   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.618712   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:00.618719   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:00.618778   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:00.656336   80762 cri.go:89] found id: ""
	I0612 21:41:00.656364   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.656375   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:00.656384   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:00.656457   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:00.694147   80762 cri.go:89] found id: ""
	I0612 21:41:00.694173   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.694182   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:00.694187   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:00.694236   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:00.733964   80762 cri.go:89] found id: ""
	I0612 21:41:00.733994   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.734006   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:00.734014   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:00.734076   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:00.771245   80762 cri.go:89] found id: ""
	I0612 21:41:00.771274   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.771287   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:00.771293   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:00.771357   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:00.809118   80762 cri.go:89] found id: ""
	I0612 21:41:00.809150   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.809158   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:00.809168   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:00.809188   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:00.863479   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:00.863514   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:00.878749   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:00.878783   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:00.955800   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:00.955825   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:00.955844   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:01.037587   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:01.037618   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:00.666373   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.165203   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:00.513805   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.017922   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.317115   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:05.817907   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.583063   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:03.597656   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:03.597732   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:03.633233   80762 cri.go:89] found id: ""
	I0612 21:41:03.633263   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.633283   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:03.633291   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:03.633357   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:03.679900   80762 cri.go:89] found id: ""
	I0612 21:41:03.679930   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.679941   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:03.679948   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:03.680018   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:03.718766   80762 cri.go:89] found id: ""
	I0612 21:41:03.718792   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.718800   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:03.718811   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:03.718868   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:03.759404   80762 cri.go:89] found id: ""
	I0612 21:41:03.759429   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.759437   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:03.759443   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:03.759496   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:03.794313   80762 cri.go:89] found id: ""
	I0612 21:41:03.794348   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.794357   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:03.794364   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:03.794430   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:03.832525   80762 cri.go:89] found id: ""
	I0612 21:41:03.832546   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.832554   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:03.832559   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:03.832607   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:03.872841   80762 cri.go:89] found id: ""
	I0612 21:41:03.872868   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.872878   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:03.872885   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:03.872945   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:03.912736   80762 cri.go:89] found id: ""
	I0612 21:41:03.912760   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.912770   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:03.912781   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:03.912794   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:03.986645   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:03.986672   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:03.986688   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:04.066766   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:04.066799   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:04.108219   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:04.108250   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:04.168866   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:04.168911   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:06.684232   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:06.698359   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:06.698443   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:06.735324   80762 cri.go:89] found id: ""
	I0612 21:41:06.735350   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.735359   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:06.735364   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:06.735418   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:06.771763   80762 cri.go:89] found id: ""
	I0612 21:41:06.771786   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.771794   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:06.771799   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:06.771850   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:06.808151   80762 cri.go:89] found id: ""
	I0612 21:41:06.808179   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.808188   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:06.808193   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:06.808263   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:06.846099   80762 cri.go:89] found id: ""
	I0612 21:41:06.846121   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.846129   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:06.846134   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:06.846182   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:06.883559   80762 cri.go:89] found id: ""
	I0612 21:41:06.883584   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.883591   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:06.883597   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:06.883645   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:06.920799   80762 cri.go:89] found id: ""
	I0612 21:41:06.920830   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.920841   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:06.920849   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:06.920914   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:06.964441   80762 cri.go:89] found id: ""
	I0612 21:41:06.964472   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.964482   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:06.964490   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:06.964561   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:07.000866   80762 cri.go:89] found id: ""
	I0612 21:41:07.000901   80762 logs.go:276] 0 containers: []
	W0612 21:41:07.000912   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:07.000924   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:07.000993   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:07.017074   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:07.017121   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:07.093873   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:07.093901   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:07.093925   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:07.171258   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:07.171293   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:07.212588   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:07.212624   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:05.166177   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:07.665354   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.665558   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:05.512109   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:07.512615   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.513483   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:08.316327   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:10.316456   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.767332   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:09.781184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:09.781249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:09.818966   80762 cri.go:89] found id: ""
	I0612 21:41:09.818999   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.819008   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:09.819014   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:09.819064   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:09.854714   80762 cri.go:89] found id: ""
	I0612 21:41:09.854742   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.854760   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:09.854772   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:09.854823   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:09.891229   80762 cri.go:89] found id: ""
	I0612 21:41:09.891257   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.891268   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:09.891274   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:09.891335   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:09.928569   80762 cri.go:89] found id: ""
	I0612 21:41:09.928598   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.928610   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:09.928617   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:09.928680   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:09.963681   80762 cri.go:89] found id: ""
	I0612 21:41:09.963714   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.963725   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:09.963733   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:09.963819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:10.002340   80762 cri.go:89] found id: ""
	I0612 21:41:10.002368   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.002380   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:10.002388   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:10.002454   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:10.041935   80762 cri.go:89] found id: ""
	I0612 21:41:10.041961   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.041972   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:10.041979   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:10.042047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:10.080541   80762 cri.go:89] found id: ""
	I0612 21:41:10.080571   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.080584   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:10.080598   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:10.080614   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:10.140904   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:10.140944   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:10.176646   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:10.176688   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:10.272147   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:10.272169   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:10.272183   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:10.352649   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:10.352686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:12.166618   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.665896   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.013177   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.013716   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.317177   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.317391   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:16.815940   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.896274   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:12.911147   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:12.911231   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:12.947628   80762 cri.go:89] found id: ""
	I0612 21:41:12.947651   80762 logs.go:276] 0 containers: []
	W0612 21:41:12.947660   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:12.947665   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:12.947726   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:12.982813   80762 cri.go:89] found id: ""
	I0612 21:41:12.982837   80762 logs.go:276] 0 containers: []
	W0612 21:41:12.982845   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:12.982851   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:12.982898   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:13.021360   80762 cri.go:89] found id: ""
	I0612 21:41:13.021403   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.021412   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:13.021417   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:13.021468   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:13.063534   80762 cri.go:89] found id: ""
	I0612 21:41:13.063566   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.063576   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:13.063585   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:13.063666   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:13.098767   80762 cri.go:89] found id: ""
	I0612 21:41:13.098796   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.098807   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:13.098816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:13.098878   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:13.140764   80762 cri.go:89] found id: ""
	I0612 21:41:13.140797   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.140809   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:13.140816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:13.140882   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:13.180356   80762 cri.go:89] found id: ""
	I0612 21:41:13.180400   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.180413   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:13.180420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:13.180482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:13.215198   80762 cri.go:89] found id: ""
	I0612 21:41:13.215227   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.215238   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:13.215249   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:13.215265   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:13.273143   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:13.273182   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:13.287861   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:13.287893   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:13.366052   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:13.366073   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:13.366099   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:13.450980   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:13.451015   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:15.991386   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:16.005618   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:16.005699   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:16.047253   80762 cri.go:89] found id: ""
	I0612 21:41:16.047281   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.047289   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:16.047295   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:16.047356   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:16.082860   80762 cri.go:89] found id: ""
	I0612 21:41:16.082886   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.082894   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:16.082899   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:16.082948   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:16.123127   80762 cri.go:89] found id: ""
	I0612 21:41:16.123152   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.123164   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:16.123187   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:16.123247   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:16.167155   80762 cri.go:89] found id: ""
	I0612 21:41:16.167189   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.167199   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:16.167207   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:16.167276   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:16.204036   80762 cri.go:89] found id: ""
	I0612 21:41:16.204061   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.204071   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:16.204079   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:16.204140   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:16.246672   80762 cri.go:89] found id: ""
	I0612 21:41:16.246701   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.246712   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:16.246721   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:16.246785   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:16.286820   80762 cri.go:89] found id: ""
	I0612 21:41:16.286849   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.286857   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:16.286864   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:16.286919   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:16.326622   80762 cri.go:89] found id: ""
	I0612 21:41:16.326649   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.326660   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:16.326667   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:16.326678   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:16.407492   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:16.407525   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:16.448207   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:16.448236   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:16.501675   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:16.501714   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:16.518173   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:16.518202   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:16.592582   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:17.166163   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.167204   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:16.514405   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.016197   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:18.816596   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:20.817504   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.093054   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:19.107926   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:19.108002   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:19.149386   80762 cri.go:89] found id: ""
	I0612 21:41:19.149411   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.149421   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:19.149429   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:19.149493   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:19.188092   80762 cri.go:89] found id: ""
	I0612 21:41:19.188120   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.188131   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:19.188139   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:19.188201   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:19.227203   80762 cri.go:89] found id: ""
	I0612 21:41:19.227229   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.227235   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:19.227242   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:19.227301   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:19.269187   80762 cri.go:89] found id: ""
	I0612 21:41:19.269217   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.269225   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:19.269232   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:19.269294   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:19.305394   80762 cri.go:89] found id: ""
	I0612 21:41:19.305418   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.305425   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:19.305431   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:19.305480   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:19.347794   80762 cri.go:89] found id: ""
	I0612 21:41:19.347825   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.347837   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:19.347846   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:19.347907   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:19.384314   80762 cri.go:89] found id: ""
	I0612 21:41:19.384346   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.384364   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:19.384372   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:19.384428   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:19.421782   80762 cri.go:89] found id: ""
	I0612 21:41:19.421811   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.421822   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:19.421834   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:19.421849   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:19.475969   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:19.476000   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:19.490683   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:19.490710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:19.574492   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:19.574513   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:19.574525   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:19.662288   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:19.662324   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:22.205404   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:22.220217   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:22.220297   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:22.256870   80762 cri.go:89] found id: ""
	I0612 21:41:22.256901   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.256913   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:22.256921   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:22.256984   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:22.290380   80762 cri.go:89] found id: ""
	I0612 21:41:22.290413   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.290425   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:22.290433   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:22.290497   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:22.324981   80762 cri.go:89] found id: ""
	I0612 21:41:22.325010   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.325019   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:22.325024   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:22.325093   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:22.362900   80762 cri.go:89] found id: ""
	I0612 21:41:22.362926   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.362938   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:22.362946   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:22.363008   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:22.399004   80762 cri.go:89] found id: ""
	I0612 21:41:22.399037   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.399048   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:22.399056   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:22.399116   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:22.434306   80762 cri.go:89] found id: ""
	I0612 21:41:22.434341   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.434355   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:22.434365   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:22.434422   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:22.479085   80762 cri.go:89] found id: ""
	I0612 21:41:22.479116   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.479129   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:22.479142   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:22.479228   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:22.516730   80762 cri.go:89] found id: ""
	I0612 21:41:22.516755   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.516761   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:22.516769   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:22.516780   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:22.570921   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:22.570957   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:22.585409   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:22.585437   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:22.667314   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:22.667342   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:22.667360   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:22.743865   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:22.743901   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:21.170060   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.666364   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:21.021658   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.512541   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.316232   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.816641   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.282306   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:25.297334   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:25.297407   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:25.336610   80762 cri.go:89] found id: ""
	I0612 21:41:25.336641   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.336654   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:25.336662   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:25.336729   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:25.373307   80762 cri.go:89] found id: ""
	I0612 21:41:25.373338   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.373350   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:25.373358   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:25.373425   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:25.413141   80762 cri.go:89] found id: ""
	I0612 21:41:25.413169   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.413177   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:25.413183   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:25.413233   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:25.450810   80762 cri.go:89] found id: ""
	I0612 21:41:25.450842   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.450853   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:25.450862   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:25.450924   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:25.487017   80762 cri.go:89] found id: ""
	I0612 21:41:25.487049   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.487255   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:25.487269   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:25.487328   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:25.524335   80762 cri.go:89] found id: ""
	I0612 21:41:25.524361   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.524371   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:25.524377   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:25.524428   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:25.560394   80762 cri.go:89] found id: ""
	I0612 21:41:25.560421   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.560429   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:25.560435   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:25.560482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:25.599334   80762 cri.go:89] found id: ""
	I0612 21:41:25.599362   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.599372   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:25.599384   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:25.599399   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:25.680153   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:25.680195   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:25.726336   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:25.726377   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:25.777064   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:25.777098   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:25.791978   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:25.792007   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:25.868860   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:25.667028   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.164920   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.514249   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.012042   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:30.013658   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.316543   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:30.816789   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.369099   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:28.382729   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:28.382786   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:28.423835   80762 cri.go:89] found id: ""
	I0612 21:41:28.423865   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.423875   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:28.423889   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:28.423953   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:28.463098   80762 cri.go:89] found id: ""
	I0612 21:41:28.463127   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.463137   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:28.463144   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:28.463223   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:28.499678   80762 cri.go:89] found id: ""
	I0612 21:41:28.499707   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.499718   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:28.499726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:28.499786   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:28.536057   80762 cri.go:89] found id: ""
	I0612 21:41:28.536089   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.536101   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:28.536108   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:28.536180   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:28.571052   80762 cri.go:89] found id: ""
	I0612 21:41:28.571080   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.571090   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:28.571098   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:28.571162   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:28.607320   80762 cri.go:89] found id: ""
	I0612 21:41:28.607348   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.607360   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:28.607368   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:28.607427   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:28.643000   80762 cri.go:89] found id: ""
	I0612 21:41:28.643029   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.643037   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:28.643042   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:28.643113   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:28.684134   80762 cri.go:89] found id: ""
	I0612 21:41:28.684164   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.684175   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:28.684186   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:28.684201   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:28.737059   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:28.737092   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:28.753290   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:28.753320   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:28.826964   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:28.826990   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:28.827009   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:28.908874   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:28.908919   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:31.450362   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:31.465831   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:31.465912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:31.507441   80762 cri.go:89] found id: ""
	I0612 21:41:31.507465   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.507474   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:31.507482   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:31.507546   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:31.541403   80762 cri.go:89] found id: ""
	I0612 21:41:31.541437   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.541450   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:31.541458   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:31.541524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:31.576367   80762 cri.go:89] found id: ""
	I0612 21:41:31.576393   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.576405   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:31.576412   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:31.576475   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:31.615053   80762 cri.go:89] found id: ""
	I0612 21:41:31.615081   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.615091   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:31.615099   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:31.615159   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:31.650460   80762 cri.go:89] found id: ""
	I0612 21:41:31.650495   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.650504   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:31.650511   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:31.650580   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:31.690764   80762 cri.go:89] found id: ""
	I0612 21:41:31.690792   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.690803   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:31.690810   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:31.690870   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:31.729785   80762 cri.go:89] found id: ""
	I0612 21:41:31.729809   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.729817   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:31.729822   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:31.729881   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:31.772978   80762 cri.go:89] found id: ""
	I0612 21:41:31.773005   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.773013   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:31.773023   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:31.773038   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:31.830451   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:31.830484   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:31.846821   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:31.846850   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:31.927289   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:31.927328   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:31.927358   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:32.004814   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:32.004852   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:30.165423   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:32.165695   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.664959   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:32.512866   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.515104   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:33.316674   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:35.816686   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.550931   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:34.567559   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:34.567618   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:34.602234   80762 cri.go:89] found id: ""
	I0612 21:41:34.602260   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.602267   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:34.602273   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:34.602318   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:34.639575   80762 cri.go:89] found id: ""
	I0612 21:41:34.639598   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.639605   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:34.639610   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:34.639656   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:34.681325   80762 cri.go:89] found id: ""
	I0612 21:41:34.681360   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.681368   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:34.681374   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:34.681478   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:34.721405   80762 cri.go:89] found id: ""
	I0612 21:41:34.721432   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.721444   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:34.721451   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:34.721517   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:34.764344   80762 cri.go:89] found id: ""
	I0612 21:41:34.764375   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.764386   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:34.764394   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:34.764459   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:34.802083   80762 cri.go:89] found id: ""
	I0612 21:41:34.802107   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.802115   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:34.802121   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:34.802181   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:34.843418   80762 cri.go:89] found id: ""
	I0612 21:41:34.843441   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.843450   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:34.843455   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:34.843501   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:34.877803   80762 cri.go:89] found id: ""
	I0612 21:41:34.877832   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.877842   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:34.877852   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:34.877867   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:34.930515   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:34.930545   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:34.943705   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:34.943729   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:35.024912   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:35.024931   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:35.024941   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:35.109129   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:35.109165   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:37.651788   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:37.667901   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:37.667964   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:37.709599   80762 cri.go:89] found id: ""
	I0612 21:41:37.709627   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.709637   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:37.709645   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:37.709700   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:37.747150   80762 cri.go:89] found id: ""
	I0612 21:41:37.747191   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.747204   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:37.747212   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:37.747273   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:37.785528   80762 cri.go:89] found id: ""
	I0612 21:41:37.785552   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.785560   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:37.785567   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:37.785614   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:37.822363   80762 cri.go:89] found id: ""
	I0612 21:41:37.822390   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.822400   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:37.822408   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:37.822468   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:36.666054   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:39.165222   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:37.012397   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:39.012503   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:38.317132   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:40.821114   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:37.858285   80762 cri.go:89] found id: ""
	I0612 21:41:37.858395   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.858409   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:37.858416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:37.858466   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:37.897500   80762 cri.go:89] found id: ""
	I0612 21:41:37.897542   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.897556   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:37.897574   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:37.897635   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:37.937878   80762 cri.go:89] found id: ""
	I0612 21:41:37.937905   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.937921   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:37.937927   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:37.937985   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:37.978282   80762 cri.go:89] found id: ""
	I0612 21:41:37.978310   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.978319   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:37.978327   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:37.978341   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:38.055864   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:38.055890   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:38.055903   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:38.135883   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:38.135918   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:38.178641   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:38.178668   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:38.236635   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:38.236686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:40.759426   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:40.773526   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:40.773598   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:40.819130   80762 cri.go:89] found id: ""
	I0612 21:41:40.819161   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.819190   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:40.819202   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:40.819264   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:40.856176   80762 cri.go:89] found id: ""
	I0612 21:41:40.856204   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.856216   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:40.856224   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:40.856287   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:40.896709   80762 cri.go:89] found id: ""
	I0612 21:41:40.896739   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.896750   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:40.896759   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:40.896820   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:40.936431   80762 cri.go:89] found id: ""
	I0612 21:41:40.936457   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.936465   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:40.936471   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:40.936528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:40.979773   80762 cri.go:89] found id: ""
	I0612 21:41:40.979809   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.979820   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:40.979828   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:40.979892   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:41.023885   80762 cri.go:89] found id: ""
	I0612 21:41:41.023910   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.023919   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:41.023925   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:41.024004   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:41.070370   80762 cri.go:89] found id: ""
	I0612 21:41:41.070396   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.070405   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:41.070411   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:41.070467   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:41.115282   80762 cri.go:89] found id: ""
	I0612 21:41:41.115311   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.115321   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:41.115332   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:41.115346   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:41.128680   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:41.128710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:41.206100   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:41.206125   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:41.206140   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:41.283499   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:41.283536   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:41.323275   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:41.323307   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:41.166258   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.666600   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:41.013379   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.512866   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.316659   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:45.816066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.875750   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:43.890156   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:43.890216   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:43.935105   80762 cri.go:89] found id: ""
	I0612 21:41:43.935135   80762 logs.go:276] 0 containers: []
	W0612 21:41:43.935147   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:43.935155   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:43.935236   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:43.980929   80762 cri.go:89] found id: ""
	I0612 21:41:43.980958   80762 logs.go:276] 0 containers: []
	W0612 21:41:43.980967   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:43.980973   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:43.981051   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:44.029387   80762 cri.go:89] found id: ""
	I0612 21:41:44.029409   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.029416   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:44.029421   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:44.029483   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:44.067415   80762 cri.go:89] found id: ""
	I0612 21:41:44.067449   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.067460   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:44.067468   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:44.067528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:44.105093   80762 cri.go:89] found id: ""
	I0612 21:41:44.105117   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.105125   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:44.105131   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:44.105178   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:44.142647   80762 cri.go:89] found id: ""
	I0612 21:41:44.142680   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.142691   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:44.142699   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:44.142759   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:44.182725   80762 cri.go:89] found id: ""
	I0612 21:41:44.182756   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.182767   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:44.182775   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:44.182836   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:44.219538   80762 cri.go:89] found id: ""
	I0612 21:41:44.219568   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.219580   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:44.219593   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:44.219608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:44.272234   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:44.272267   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:44.285631   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:44.285663   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:44.362453   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:44.362470   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:44.362482   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:44.444624   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:44.444656   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:46.985731   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:46.999749   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:46.999819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:47.035051   80762 cri.go:89] found id: ""
	I0612 21:41:47.035073   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.035080   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:47.035086   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:47.035136   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:47.077929   80762 cri.go:89] found id: ""
	I0612 21:41:47.077964   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.077975   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:47.077982   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:47.078062   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:47.111621   80762 cri.go:89] found id: ""
	I0612 21:41:47.111660   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.111671   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:47.111679   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:47.111744   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:47.150700   80762 cri.go:89] found id: ""
	I0612 21:41:47.150725   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.150733   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:47.150739   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:47.150787   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:47.189547   80762 cri.go:89] found id: ""
	I0612 21:41:47.189576   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.189586   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:47.189593   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:47.189660   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:47.229482   80762 cri.go:89] found id: ""
	I0612 21:41:47.229510   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.229522   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:47.229530   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:47.229599   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:47.266798   80762 cri.go:89] found id: ""
	I0612 21:41:47.266826   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.266837   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:47.266844   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:47.266906   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:47.302256   80762 cri.go:89] found id: ""
	I0612 21:41:47.302280   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.302287   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:47.302295   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:47.302306   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:47.354485   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:47.354526   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:47.368689   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:47.368713   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:47.438219   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:47.438244   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:47.438257   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:47.514199   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:47.514227   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:46.165541   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:48.664957   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:45.512922   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:47.513491   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.012630   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:47.817136   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.317083   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.056394   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:50.069348   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:50.069482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:50.106057   80762 cri.go:89] found id: ""
	I0612 21:41:50.106087   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.106097   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:50.106104   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:50.106162   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:50.144532   80762 cri.go:89] found id: ""
	I0612 21:41:50.144560   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.144571   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:50.144579   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:50.144662   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:50.184549   80762 cri.go:89] found id: ""
	I0612 21:41:50.184575   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.184583   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:50.184588   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:50.184648   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:50.228520   80762 cri.go:89] found id: ""
	I0612 21:41:50.228555   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.228569   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:50.228578   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:50.228649   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:50.265697   80762 cri.go:89] found id: ""
	I0612 21:41:50.265726   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.265737   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:50.265744   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:50.265818   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:50.301353   80762 cri.go:89] found id: ""
	I0612 21:41:50.301393   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.301410   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:50.301416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:50.301477   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:50.337273   80762 cri.go:89] found id: ""
	I0612 21:41:50.337298   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.337306   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:50.337313   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:50.337374   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:50.383090   80762 cri.go:89] found id: ""
	I0612 21:41:50.383116   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.383126   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:50.383138   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:50.383151   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:50.454193   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:50.454240   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:50.477753   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:50.477779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:50.544052   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:50.544075   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:50.544091   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:50.626441   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:50.626480   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:50.666068   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.666287   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.013142   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:54.512869   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.318942   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:54.816918   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:56.818011   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:53.168599   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:53.181682   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:53.181764   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:53.228060   80762 cri.go:89] found id: ""
	I0612 21:41:53.228096   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.228107   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:53.228115   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:53.228176   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:53.264867   80762 cri.go:89] found id: ""
	I0612 21:41:53.264890   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.264898   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:53.264903   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:53.264950   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:53.298351   80762 cri.go:89] found id: ""
	I0612 21:41:53.298378   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.298386   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:53.298392   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:53.298448   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:53.335888   80762 cri.go:89] found id: ""
	I0612 21:41:53.335910   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.335917   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:53.335922   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:53.335980   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:53.376131   80762 cri.go:89] found id: ""
	I0612 21:41:53.376166   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.376175   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:53.376183   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:53.376240   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:53.412059   80762 cri.go:89] found id: ""
	I0612 21:41:53.412082   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.412088   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:53.412097   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:53.412142   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:53.446776   80762 cri.go:89] found id: ""
	I0612 21:41:53.446805   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.446816   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:53.446823   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:53.446894   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:53.482411   80762 cri.go:89] found id: ""
	I0612 21:41:53.482433   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.482441   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:53.482449   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:53.482460   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:53.522419   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:53.522448   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:53.573107   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:53.573141   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:53.587101   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:53.587147   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:53.665631   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:53.665660   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:53.665675   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:56.242482   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:56.255606   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:56.255682   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:56.290837   80762 cri.go:89] found id: ""
	I0612 21:41:56.290861   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.290872   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:56.290880   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:56.290938   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:56.325429   80762 cri.go:89] found id: ""
	I0612 21:41:56.325458   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.325466   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:56.325471   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:56.325534   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:56.359809   80762 cri.go:89] found id: ""
	I0612 21:41:56.359835   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.359845   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:56.359852   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:56.359912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:56.397775   80762 cri.go:89] found id: ""
	I0612 21:41:56.397803   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.397815   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:56.397823   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:56.397884   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:56.433917   80762 cri.go:89] found id: ""
	I0612 21:41:56.433945   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.433956   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:56.433963   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:56.434028   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:56.467390   80762 cri.go:89] found id: ""
	I0612 21:41:56.467419   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.467429   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:56.467438   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:56.467496   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:56.504014   80762 cri.go:89] found id: ""
	I0612 21:41:56.504048   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.504059   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:56.504067   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:56.504138   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:56.544157   80762 cri.go:89] found id: ""
	I0612 21:41:56.544187   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.544198   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:56.544209   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:56.544224   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:56.595431   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:56.595462   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:56.608897   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:56.608936   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:56.682706   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:56.682735   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:56.682749   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:56.762598   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:56.762634   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:55.166152   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:57.167363   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.666265   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:56.514832   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:58.515091   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.317285   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:01.818345   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.302898   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:59.317901   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:59.317976   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:59.360136   80762 cri.go:89] found id: ""
	I0612 21:41:59.360164   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.360174   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:59.360181   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:59.360249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:59.397205   80762 cri.go:89] found id: ""
	I0612 21:41:59.397233   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.397244   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:59.397252   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:59.397312   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:59.437063   80762 cri.go:89] found id: ""
	I0612 21:41:59.437093   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.437105   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:59.437113   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:59.437172   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:59.472800   80762 cri.go:89] found id: ""
	I0612 21:41:59.472827   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.472835   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:59.472843   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:59.472904   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:59.509452   80762 cri.go:89] found id: ""
	I0612 21:41:59.509474   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.509482   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:59.509487   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:59.509534   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:59.546121   80762 cri.go:89] found id: ""
	I0612 21:41:59.546151   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.546162   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:59.546170   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:59.546231   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:59.582983   80762 cri.go:89] found id: ""
	I0612 21:41:59.583007   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.583014   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:59.583020   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:59.583065   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:59.621110   80762 cri.go:89] found id: ""
	I0612 21:41:59.621148   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.621160   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:59.621171   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:59.621187   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:59.673113   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:59.673143   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:59.688106   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:59.688171   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:59.767653   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:59.767678   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:59.767692   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:59.848467   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:59.848507   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:02.391324   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:02.406543   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:02.406621   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:02.442225   80762 cri.go:89] found id: ""
	I0612 21:42:02.442255   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.442265   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:02.442273   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:02.442341   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:02.479445   80762 cri.go:89] found id: ""
	I0612 21:42:02.479476   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.479487   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:02.479495   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:02.479557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:02.517654   80762 cri.go:89] found id: ""
	I0612 21:42:02.517685   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.517697   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:02.517705   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:02.517775   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:02.562743   80762 cri.go:89] found id: ""
	I0612 21:42:02.562777   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.562788   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:02.562807   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:02.562873   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:02.597775   80762 cri.go:89] found id: ""
	I0612 21:42:02.597805   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.597816   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:02.597824   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:02.597886   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:02.633869   80762 cri.go:89] found id: ""
	I0612 21:42:02.633901   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.633913   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:02.633921   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:02.633979   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:02.671931   80762 cri.go:89] found id: ""
	I0612 21:42:02.671962   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.671974   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:02.671982   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:02.672044   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:02.709162   80762 cri.go:89] found id: ""
	I0612 21:42:02.709192   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.709204   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:02.709214   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:02.709228   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:02.722937   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:02.722967   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:02.798249   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:02.798275   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:02.798292   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:02.165664   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:04.166215   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:01.012458   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:03.513414   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:04.317221   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:06.818062   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:02.876339   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:02.876376   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:02.913080   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:02.913106   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:05.464433   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:05.478249   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:05.478326   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:05.520742   80762 cri.go:89] found id: ""
	I0612 21:42:05.520765   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.520772   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:05.520778   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:05.520840   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:05.564864   80762 cri.go:89] found id: ""
	I0612 21:42:05.564896   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.564904   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:05.564911   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:05.564956   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:05.602917   80762 cri.go:89] found id: ""
	I0612 21:42:05.602942   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.602950   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:05.602956   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:05.603040   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:05.645073   80762 cri.go:89] found id: ""
	I0612 21:42:05.645104   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.645112   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:05.645119   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:05.645166   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:05.684133   80762 cri.go:89] found id: ""
	I0612 21:42:05.684165   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.684176   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:05.684184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:05.684249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:05.721461   80762 cri.go:89] found id: ""
	I0612 21:42:05.721489   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.721499   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:05.721506   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:05.721573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:05.756710   80762 cri.go:89] found id: ""
	I0612 21:42:05.756744   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.756755   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:05.756763   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:05.756814   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:05.792182   80762 cri.go:89] found id: ""
	I0612 21:42:05.792210   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.792220   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:05.792230   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:05.792245   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:05.836597   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:05.836632   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:05.888704   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:05.888742   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:05.903354   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:05.903387   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:05.976146   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:05.976169   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:05.976183   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:06.664789   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.666830   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:06.013885   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.512997   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:09.316398   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:11.317014   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.559612   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:08.573592   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:08.573648   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:08.613347   80762 cri.go:89] found id: ""
	I0612 21:42:08.613373   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.613381   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:08.613387   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:08.613449   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:08.650606   80762 cri.go:89] found id: ""
	I0612 21:42:08.650634   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.650643   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:08.650648   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:08.650692   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:08.687056   80762 cri.go:89] found id: ""
	I0612 21:42:08.687087   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.687097   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:08.687105   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:08.687191   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:08.723112   80762 cri.go:89] found id: ""
	I0612 21:42:08.723138   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.723146   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:08.723161   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:08.723238   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:08.764772   80762 cri.go:89] found id: ""
	I0612 21:42:08.764801   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.764812   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:08.764820   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:08.764888   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:08.801914   80762 cri.go:89] found id: ""
	I0612 21:42:08.801944   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.801954   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:08.801962   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:08.802025   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:08.837991   80762 cri.go:89] found id: ""
	I0612 21:42:08.838017   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.838025   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:08.838030   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:08.838084   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:08.874977   80762 cri.go:89] found id: ""
	I0612 21:42:08.875016   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.875027   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:08.875039   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:08.875058   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:08.931628   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:08.931659   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:08.946763   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:08.946791   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:09.028039   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:09.028061   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:09.028079   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:09.104350   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:09.104406   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:11.645114   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:11.659382   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:11.659455   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:11.702205   80762 cri.go:89] found id: ""
	I0612 21:42:11.702236   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.702246   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:11.702254   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:11.702309   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:11.748328   80762 cri.go:89] found id: ""
	I0612 21:42:11.748350   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.748357   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:11.748362   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:11.748408   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:11.788980   80762 cri.go:89] found id: ""
	I0612 21:42:11.789009   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.789020   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:11.789027   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:11.789083   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:11.829886   80762 cri.go:89] found id: ""
	I0612 21:42:11.829910   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.829920   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:11.829928   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:11.830006   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:11.870088   80762 cri.go:89] found id: ""
	I0612 21:42:11.870120   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.870131   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:11.870138   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:11.870201   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:11.907862   80762 cri.go:89] found id: ""
	I0612 21:42:11.907893   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.907905   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:11.907913   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:11.907973   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:11.947773   80762 cri.go:89] found id: ""
	I0612 21:42:11.947798   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.947808   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:11.947816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:11.947876   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:11.987806   80762 cri.go:89] found id: ""
	I0612 21:42:11.987837   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.987848   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:11.987859   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:11.987878   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:12.043451   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:12.043481   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:12.057946   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:12.057980   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:12.134265   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:12.134298   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:12.134310   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:12.221276   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:12.221315   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:11.165305   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.165846   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:11.012728   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.512292   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.512327   80243 pod_ready.go:81] duration metric: took 4m0.006424182s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	E0612 21:42:13.512336   80243 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0612 21:42:13.512343   80243 pod_ready.go:38] duration metric: took 4m5.595554955s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:42:13.512359   80243 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:42:13.512383   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:13.512428   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:13.571855   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:13.571882   80243 cri.go:89] found id: ""
	I0612 21:42:13.571892   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:13.571942   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.576505   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:13.576557   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:13.614768   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:13.614792   80243 cri.go:89] found id: ""
	I0612 21:42:13.614799   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:13.614847   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.619276   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:13.619342   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:13.662832   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:13.662856   80243 cri.go:89] found id: ""
	I0612 21:42:13.662866   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:13.662931   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.667956   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:13.668031   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:13.710456   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:13.710479   80243 cri.go:89] found id: ""
	I0612 21:42:13.710487   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:13.710540   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.715411   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:13.715480   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:13.759924   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:13.759952   80243 cri.go:89] found id: ""
	I0612 21:42:13.759965   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:13.760027   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.764854   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:13.764919   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:13.804802   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:13.804826   80243 cri.go:89] found id: ""
	I0612 21:42:13.804834   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:13.804891   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.809421   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:13.809489   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:13.846580   80243 cri.go:89] found id: ""
	I0612 21:42:13.846615   80243 logs.go:276] 0 containers: []
	W0612 21:42:13.846625   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:13.846633   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:13.846685   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:13.893480   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:13.893504   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:13.893510   80243 cri.go:89] found id: ""
	I0612 21:42:13.893523   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:13.893571   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.898530   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.905072   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:13.905100   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:13.942165   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:13.942195   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:13.981852   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:13.981882   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:14.018431   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:14.018457   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:14.067616   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:14.067645   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:14.082853   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:14.082886   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:14.220156   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:14.220188   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:14.274395   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:14.274430   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:14.319087   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:14.319116   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:14.834792   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:14.834831   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:14.893213   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:14.893252   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:14.957423   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:14.957466   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:15.013756   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:15.013803   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:13.318558   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:15.318904   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:14.760949   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:14.775242   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:14.775356   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:14.818818   80762 cri.go:89] found id: ""
	I0612 21:42:14.818847   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.818856   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:14.818863   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:14.818931   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:14.859106   80762 cri.go:89] found id: ""
	I0612 21:42:14.859146   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.859157   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:14.859164   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:14.859247   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:14.894993   80762 cri.go:89] found id: ""
	I0612 21:42:14.895016   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.895026   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:14.895037   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:14.895087   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:14.943534   80762 cri.go:89] found id: ""
	I0612 21:42:14.943561   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.943572   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:14.943579   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:14.943645   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:14.985243   80762 cri.go:89] found id: ""
	I0612 21:42:14.985267   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.985274   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:14.985280   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:14.985328   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:15.029253   80762 cri.go:89] found id: ""
	I0612 21:42:15.029286   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.029297   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:15.029305   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:15.029371   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:15.063471   80762 cri.go:89] found id: ""
	I0612 21:42:15.063499   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.063510   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:15.063517   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:15.063580   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:15.101152   80762 cri.go:89] found id: ""
	I0612 21:42:15.101181   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.101201   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:15.101212   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:15.101227   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:15.178398   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:15.178416   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:15.178429   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:15.255420   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:15.255468   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:15.295492   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:15.295519   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:15.345010   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:15.345051   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:15.166546   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.666141   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:19.672626   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.561453   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:17.579672   80243 api_server.go:72] duration metric: took 4m17.376220984s to wait for apiserver process to appear ...
	I0612 21:42:17.579702   80243 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:42:17.579741   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:17.579787   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:17.620290   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:17.620318   80243 cri.go:89] found id: ""
	I0612 21:42:17.620325   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:17.620387   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.624598   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:17.624658   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:17.665957   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:17.665985   80243 cri.go:89] found id: ""
	I0612 21:42:17.665995   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:17.666056   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.671143   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:17.671222   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:17.717377   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:17.717396   80243 cri.go:89] found id: ""
	I0612 21:42:17.717404   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:17.717459   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.721710   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:17.721774   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:17.762712   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:17.762739   80243 cri.go:89] found id: ""
	I0612 21:42:17.762749   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:17.762807   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.767258   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:17.767329   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:17.803905   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:17.803956   80243 cri.go:89] found id: ""
	I0612 21:42:17.803969   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:17.804034   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.808260   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:17.808323   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:17.847402   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:17.847432   80243 cri.go:89] found id: ""
	I0612 21:42:17.847441   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:17.847502   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.851685   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:17.851757   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:17.897026   80243 cri.go:89] found id: ""
	I0612 21:42:17.897051   80243 logs.go:276] 0 containers: []
	W0612 21:42:17.897059   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:17.897065   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:17.897122   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:17.953764   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:17.953793   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:17.953799   80243 cri.go:89] found id: ""
	I0612 21:42:17.953808   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:17.953875   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.959822   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.965103   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:17.965127   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:18.089205   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:18.089229   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:18.153823   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:18.153876   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:18.198010   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:18.198037   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:18.255380   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:18.255410   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:18.271692   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:18.271725   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:18.318018   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:18.318049   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:18.379352   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:18.379386   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:18.437854   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:18.437884   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:18.487618   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:18.487650   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:18.934735   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:18.934784   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:18.983010   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:18.983050   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:19.043569   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:19.043605   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:17.819422   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:20.315423   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.862640   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:17.879256   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:17.879333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:17.918910   80762 cri.go:89] found id: ""
	I0612 21:42:17.918940   80762 logs.go:276] 0 containers: []
	W0612 21:42:17.918951   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:17.918958   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:17.919018   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:17.959701   80762 cri.go:89] found id: ""
	I0612 21:42:17.959726   80762 logs.go:276] 0 containers: []
	W0612 21:42:17.959734   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:17.959739   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:17.959820   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:18.005096   80762 cri.go:89] found id: ""
	I0612 21:42:18.005125   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.005142   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:18.005150   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:18.005211   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:18.046877   80762 cri.go:89] found id: ""
	I0612 21:42:18.046907   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.046919   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:18.046927   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:18.046992   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:18.087907   80762 cri.go:89] found id: ""
	I0612 21:42:18.087934   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.087946   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:18.087953   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:18.088016   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:18.139423   80762 cri.go:89] found id: ""
	I0612 21:42:18.139453   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.139464   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:18.139473   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:18.139535   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:18.180433   80762 cri.go:89] found id: ""
	I0612 21:42:18.180459   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.180469   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:18.180476   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:18.180537   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:18.220966   80762 cri.go:89] found id: ""
	I0612 21:42:18.220996   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.221005   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:18.221015   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:18.221033   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:18.276006   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:18.276031   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:18.290975   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:18.291026   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:18.369318   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:18.369345   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:18.369359   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:18.451336   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:18.451380   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:21.016353   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:21.030544   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:21.030611   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:21.072558   80762 cri.go:89] found id: ""
	I0612 21:42:21.072583   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.072591   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:21.072596   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:21.072649   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:21.106320   80762 cri.go:89] found id: ""
	I0612 21:42:21.106352   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.106364   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:21.106372   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:21.106431   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:21.139155   80762 cri.go:89] found id: ""
	I0612 21:42:21.139201   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.139212   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:21.139220   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:21.139285   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:21.178731   80762 cri.go:89] found id: ""
	I0612 21:42:21.178762   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.178772   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:21.178779   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:21.178838   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:21.213606   80762 cri.go:89] found id: ""
	I0612 21:42:21.213635   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.213645   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:21.213652   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:21.213707   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:21.250966   80762 cri.go:89] found id: ""
	I0612 21:42:21.250993   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.251009   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:21.251017   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:21.251084   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:21.289434   80762 cri.go:89] found id: ""
	I0612 21:42:21.289457   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.289465   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:21.289474   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:21.289520   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:21.329028   80762 cri.go:89] found id: ""
	I0612 21:42:21.329058   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.329069   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:21.329080   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:21.329098   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:21.342621   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:21.342648   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:21.418742   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:21.418766   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:21.418779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:21.493909   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:21.493944   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:21.534693   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:21.534723   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:22.165337   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.166122   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:21.581443   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:42:21.586756   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 200:
	ok
	I0612 21:42:21.587670   80243 api_server.go:141] control plane version: v1.30.1
	I0612 21:42:21.587691   80243 api_server.go:131] duration metric: took 4.007982669s to wait for apiserver health ...
	I0612 21:42:21.587699   80243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:42:21.587720   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:21.587761   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:21.627942   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:21.627965   80243 cri.go:89] found id: ""
	I0612 21:42:21.627974   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:21.628036   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.632308   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:21.632380   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:21.674453   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:21.674474   80243 cri.go:89] found id: ""
	I0612 21:42:21.674482   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:21.674539   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.679303   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:21.679376   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:21.717454   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:21.717483   80243 cri.go:89] found id: ""
	I0612 21:42:21.717492   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:21.717555   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.722113   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:21.722176   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:21.758752   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:21.758780   80243 cri.go:89] found id: ""
	I0612 21:42:21.758790   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:21.758847   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.763397   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:21.763465   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:21.802552   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:21.802574   80243 cri.go:89] found id: ""
	I0612 21:42:21.802583   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:21.802641   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.807570   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:21.807633   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:21.855093   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:21.855118   80243 cri.go:89] found id: ""
	I0612 21:42:21.855128   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:21.855212   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.860163   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:21.860231   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:21.907934   80243 cri.go:89] found id: ""
	I0612 21:42:21.907960   80243 logs.go:276] 0 containers: []
	W0612 21:42:21.907969   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:21.907977   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:21.908046   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:21.950085   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:21.950114   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:21.950120   80243 cri.go:89] found id: ""
	I0612 21:42:21.950128   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:21.950186   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.955633   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.960017   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:21.960038   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:22.015659   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:22.015708   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:22.074063   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:22.074093   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:22.113545   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:22.113581   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:22.152550   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:22.152583   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:22.556816   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:22.556856   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:22.602506   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:22.602542   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:22.655545   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:22.655577   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:22.775731   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:22.775775   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:22.827447   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:22.827476   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:22.864866   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:22.864898   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:22.903885   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:22.903912   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:22.947166   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:22.947214   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
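(The "Gathering logs for ..." steps above are minikube shelling into the node and running standard tooling. A minimal sketch consolidating the exact commands visible in this log, for reproducing the same collection by hand via `minikube ssh` or directly on the node; the container ID is a placeholder to be taken from `crictl ps -a`, and the kubectl binary path matches the Kubernetes version of this particular run:

    # runtime and kubelet logs
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400

    # per-container logs (substitute a container ID from `crictl ps -a`)
    sudo /usr/bin/crictl logs --tail 400 <container-id>

    # container status, with a docker fallback
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

    # node description and kernel warnings
    sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
)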
	I0612 21:42:25.472711   80243 system_pods.go:59] 8 kube-system pods found
	I0612 21:42:25.472743   80243 system_pods.go:61] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running
	I0612 21:42:25.472750   80243 system_pods.go:61] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running
	I0612 21:42:25.472755   80243 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running
	I0612 21:42:25.472761   80243 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running
	I0612 21:42:25.472765   80243 system_pods.go:61] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running
	I0612 21:42:25.472770   80243 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running
	I0612 21:42:25.472777   80243 system_pods.go:61] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:42:25.472783   80243 system_pods.go:61] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running
	I0612 21:42:25.472794   80243 system_pods.go:74] duration metric: took 3.885088008s to wait for pod list to return data ...
	I0612 21:42:25.472803   80243 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:42:25.475046   80243 default_sa.go:45] found service account: "default"
	I0612 21:42:25.475072   80243 default_sa.go:55] duration metric: took 2.260179ms for default service account to be created ...
	I0612 21:42:25.475082   80243 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:42:25.479903   80243 system_pods.go:86] 8 kube-system pods found
	I0612 21:42:25.479925   80243 system_pods.go:89] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running
	I0612 21:42:25.479931   80243 system_pods.go:89] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running
	I0612 21:42:25.479935   80243 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running
	I0612 21:42:25.479940   80243 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running
	I0612 21:42:25.479944   80243 system_pods.go:89] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running
	I0612 21:42:25.479950   80243 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running
	I0612 21:42:25.479959   80243 system_pods.go:89] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:42:25.479969   80243 system_pods.go:89] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running
	I0612 21:42:25.479979   80243 system_pods.go:126] duration metric: took 4.890624ms to wait for k8s-apps to be running ...
	I0612 21:42:25.479990   80243 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:42:25.480037   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:42:25.496529   80243 system_svc.go:56] duration metric: took 16.534285ms WaitForService to wait for kubelet
	I0612 21:42:25.496549   80243 kubeadm.go:576] duration metric: took 4m25.293104149s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:42:25.496565   80243 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:42:25.499277   80243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:42:25.499293   80243 node_conditions.go:123] node cpu capacity is 2
	I0612 21:42:25.499304   80243 node_conditions.go:105] duration metric: took 2.734965ms to run NodePressure ...
	I0612 21:42:25.499314   80243 start.go:240] waiting for startup goroutines ...
	I0612 21:42:25.499320   80243 start.go:245] waiting for cluster config update ...
	I0612 21:42:25.499339   80243 start.go:254] writing updated cluster config ...
	I0612 21:42:25.499628   80243 ssh_runner.go:195] Run: rm -f paused
	I0612 21:42:25.547780   80243 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:42:25.549693   80243 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-376087" cluster and "default" namespace by default
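(Before printing "Done!", the log above records a fixed sequence of readiness checks for this profile: apiserver health, kube-system pods, the "default" service account, running k8s-apps, an active kubelet service, and NodePressure conditions. A rough manual approximation of those checks, assuming the kubeconfig context carries the profile name shown in the "Done!" line:

    kubectl --context default-k8s-diff-port-376087 get --raw='/healthz'                 # apiserver health
    kubectl --context default-k8s-diff-port-376087 -n kube-system get pods              # system pods present
    kubectl --context default-k8s-diff-port-376087 get serviceaccount default           # default SA created
    kubectl --context default-k8s-diff-port-376087 describe node | grep -A4 'Conditions:'  # node pressure
    minikube -p default-k8s-diff-port-376087 ssh 'sudo systemctl is-active kubelet'     # kubelet service running
)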
	I0612 21:42:22.317078   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.317826   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:26.818102   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.086466   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:24.101820   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:24.101877   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:24.145732   80762 cri.go:89] found id: ""
	I0612 21:42:24.145757   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.145767   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:24.145774   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:24.145832   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:24.182765   80762 cri.go:89] found id: ""
	I0612 21:42:24.182788   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.182795   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:24.182801   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:24.182889   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:24.235093   80762 cri.go:89] found id: ""
	I0612 21:42:24.235121   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.235129   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:24.235134   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:24.235208   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:24.269788   80762 cri.go:89] found id: ""
	I0612 21:42:24.269809   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.269816   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:24.269822   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:24.269867   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:24.306594   80762 cri.go:89] found id: ""
	I0612 21:42:24.306620   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.306628   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:24.306634   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:24.306693   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:24.343766   80762 cri.go:89] found id: ""
	I0612 21:42:24.343786   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.343795   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:24.343802   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:24.343858   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:24.384417   80762 cri.go:89] found id: ""
	I0612 21:42:24.384447   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.384457   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:24.384464   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:24.384524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:24.424935   80762 cri.go:89] found id: ""
	I0612 21:42:24.424958   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.424965   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:24.424974   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:24.424988   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:24.499737   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:24.499771   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:24.537631   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:24.537667   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:24.593743   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:24.593779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:24.608078   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:24.608107   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:24.679729   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
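(The repeated "connection to the server localhost:8443 was refused" failures here line up with the "0 containers" results just above: there is no kube-apiserver container running, so nothing answers on 8443 and "describe nodes" cannot work. A hedged diagnostic sketch for confirming that on the node, separate from what the test itself runs:

    sudo crictl ps -a --quiet --name=kube-apiserver   # same check as the log; empty output = no apiserver container
    curl -sk https://localhost:8443/healthz; echo     # does anything answer on the apiserver port?
    sudo journalctl -u kubelet -n 50 --no-pager       # why the static pod is not coming up
)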
	I0612 21:42:27.180828   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:27.195484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:27.195552   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:27.235725   80762 cri.go:89] found id: ""
	I0612 21:42:27.235750   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.235761   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:27.235768   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:27.235816   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:27.279763   80762 cri.go:89] found id: ""
	I0612 21:42:27.279795   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.279806   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:27.279814   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:27.279875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:27.320510   80762 cri.go:89] found id: ""
	I0612 21:42:27.320534   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.320543   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:27.320554   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:27.320641   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:27.359195   80762 cri.go:89] found id: ""
	I0612 21:42:27.359227   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.359239   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:27.359247   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:27.359312   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:27.394977   80762 cri.go:89] found id: ""
	I0612 21:42:27.395004   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.395015   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:27.395033   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:27.395099   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:27.431905   80762 cri.go:89] found id: ""
	I0612 21:42:27.431925   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.431933   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:27.431945   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:27.431990   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:27.469929   80762 cri.go:89] found id: ""
	I0612 21:42:27.469954   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.469961   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:27.469967   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:27.470024   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:27.505128   80762 cri.go:89] found id: ""
	I0612 21:42:27.505153   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.505160   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:27.505169   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:27.505180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:27.556739   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:27.556771   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:27.572730   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:27.572757   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:27.646797   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:27.646819   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:27.646836   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:27.726554   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:27.726599   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:26.665496   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:29.166323   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:29.316302   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:31.316334   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:30.268770   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:30.282575   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:30.282635   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:30.321243   80762 cri.go:89] found id: ""
	I0612 21:42:30.321276   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.321288   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:30.321295   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:30.321342   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:30.359403   80762 cri.go:89] found id: ""
	I0612 21:42:30.359432   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.359443   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:30.359451   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:30.359505   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:30.395967   80762 cri.go:89] found id: ""
	I0612 21:42:30.396006   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.396015   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:30.396028   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:30.396087   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:30.438093   80762 cri.go:89] found id: ""
	I0612 21:42:30.438123   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.438132   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:30.438138   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:30.438192   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:30.476859   80762 cri.go:89] found id: ""
	I0612 21:42:30.476888   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.476898   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:30.476905   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:30.476968   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:30.512998   80762 cri.go:89] found id: ""
	I0612 21:42:30.513026   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.513037   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:30.513045   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:30.513106   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:30.548822   80762 cri.go:89] found id: ""
	I0612 21:42:30.548847   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.548855   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:30.548861   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:30.548908   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:30.584385   80762 cri.go:89] found id: ""
	I0612 21:42:30.584417   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.584426   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:30.584439   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:30.584454   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:30.685995   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:30.686019   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:30.686030   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:30.778789   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:30.778827   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:30.819467   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:30.819511   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:30.872563   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:30.872599   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:31.659828   80404 pod_ready.go:81] duration metric: took 4m0.000909177s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" ...
	E0612 21:42:31.659857   80404 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0612 21:42:31.659875   80404 pod_ready.go:38] duration metric: took 4m13.021158077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:42:31.659904   80404 kubeadm.go:591] duration metric: took 4m20.257268424s to restartPrimaryControlPlane
	W0612 21:42:31.659968   80404 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:42:31.660002   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:42:33.316457   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:35.316525   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:33.387831   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:33.401663   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:33.401740   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:33.439690   80762 cri.go:89] found id: ""
	I0612 21:42:33.439723   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.439735   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:33.439743   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:33.439805   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:33.480330   80762 cri.go:89] found id: ""
	I0612 21:42:33.480357   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.480365   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:33.480371   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:33.480422   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:33.520367   80762 cri.go:89] found id: ""
	I0612 21:42:33.520396   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.520407   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:33.520415   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:33.520476   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:33.556859   80762 cri.go:89] found id: ""
	I0612 21:42:33.556892   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.556904   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:33.556911   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:33.556963   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:33.595982   80762 cri.go:89] found id: ""
	I0612 21:42:33.596014   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.596024   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:33.596030   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:33.596091   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:33.630942   80762 cri.go:89] found id: ""
	I0612 21:42:33.630974   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.630986   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:33.630994   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:33.631055   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:33.671649   80762 cri.go:89] found id: ""
	I0612 21:42:33.671676   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.671684   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:33.671690   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:33.671734   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:33.716664   80762 cri.go:89] found id: ""
	I0612 21:42:33.716690   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.716700   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:33.716711   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:33.716726   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:33.734168   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:33.734198   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:33.826469   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:33.826491   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:33.826507   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:33.915109   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:33.915142   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:33.957969   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:33.958007   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:36.515258   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:36.529603   80762 kubeadm.go:591] duration metric: took 4m4.234271001s to restartPrimaryControlPlane
	W0612 21:42:36.529688   80762 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:42:36.529719   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:42:37.316720   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:39.317633   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:41.816783   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:41.545629   80762 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.01588354s)
	I0612 21:42:41.545734   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:42:41.561025   80762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:42:41.572788   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:42:41.583027   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:42:41.583052   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:42:41.583095   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:42:41.593433   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:42:41.593502   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:42:41.603944   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:42:41.613382   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:42:41.613432   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:42:41.622874   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:42:41.632270   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:42:41.632370   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:42:41.642072   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:42:41.652120   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:42:41.652194   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
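(The four grep-then-rm steps above apply the same stale-kubeconfig check to each file: if the expected control-plane endpoint is not found in the file, the file is removed before `kubeadm init` regenerates it. Consolidated, the pattern the log shows is roughly:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf   # drop configs that do not point at the expected endpoint
    done
)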
	I0612 21:42:41.662684   80762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:42:41.894903   80762 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:42:43.817122   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:45.817164   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:47.817201   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:50.316134   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:52.317090   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:54.318066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:56.816196   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:58.817948   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:01.316826   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:03.728120   80404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.068094257s)
	I0612 21:43:03.728183   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:03.744990   80404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:43:03.755365   80404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:43:03.765154   80404 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:43:03.765181   80404 kubeadm.go:156] found existing configuration files:
	
	I0612 21:43:03.765226   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:43:03.775246   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:43:03.775304   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:43:03.785389   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:43:03.794999   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:43:03.795046   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:43:03.804771   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:43:03.814137   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:43:03.814187   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:43:03.824449   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:43:03.833631   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:43:03.833687   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:43:03.843203   80404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:43:03.895827   80404 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 21:43:03.895927   80404 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:43:04.040495   80404 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:43:04.040666   80404 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:43:04.040822   80404 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:43:04.252894   80404 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:43:04.254835   80404 out.go:204]   - Generating certificates and keys ...
	I0612 21:43:04.254952   80404 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:43:04.255060   80404 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:43:04.255219   80404 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:43:04.255296   80404 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:43:04.255399   80404 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:43:04.255490   80404 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:43:04.255589   80404 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:43:04.255692   80404 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:43:04.255794   80404 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:43:04.255885   80404 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:43:04.255923   80404 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:43:04.255978   80404 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:43:04.460505   80404 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:43:04.640215   80404 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 21:43:04.722455   80404 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:43:04.862670   80404 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:43:05.112478   80404 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:43:05.113163   80404 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:43:05.115573   80404 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:43:03.817386   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:06.317207   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:05.117650   80404 out.go:204]   - Booting up control plane ...
	I0612 21:43:05.117758   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:43:05.117887   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:43:05.119410   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:43:05.139412   80404 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:43:05.139504   80404 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:43:05.139575   80404 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:43:05.268539   80404 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 21:43:05.268636   80404 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 21:43:05.771267   80404 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.898809ms
	I0612 21:43:05.771364   80404 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 21:43:11.274484   80404 kubeadm.go:309] [api-check] The API server is healthy after 5.503111655s
	I0612 21:43:11.291273   80404 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 21:43:11.319349   80404 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 21:43:11.357447   80404 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 21:43:11.357709   80404 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-591460 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 21:43:11.368936   80404 kubeadm.go:309] [bootstrap-token] Using token: 0iiegq.ujvrnknfmyshffxu
	I0612 21:43:08.816875   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:10.817031   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:11.370411   80404 out.go:204]   - Configuring RBAC rules ...
	I0612 21:43:11.370567   80404 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 21:43:11.375891   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 21:43:11.388345   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 21:43:11.392726   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 21:43:11.396867   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 21:43:11.401212   80404 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 21:43:11.683506   80404 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 21:43:12.114832   80404 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 21:43:12.683696   80404 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 21:43:12.683724   80404 kubeadm.go:309] 
	I0612 21:43:12.683811   80404 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 21:43:12.683823   80404 kubeadm.go:309] 
	I0612 21:43:12.683938   80404 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 21:43:12.683958   80404 kubeadm.go:309] 
	I0612 21:43:12.684002   80404 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 21:43:12.684070   80404 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 21:43:12.684129   80404 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 21:43:12.684146   80404 kubeadm.go:309] 
	I0612 21:43:12.684232   80404 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 21:43:12.684247   80404 kubeadm.go:309] 
	I0612 21:43:12.684317   80404 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 21:43:12.684330   80404 kubeadm.go:309] 
	I0612 21:43:12.684398   80404 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 21:43:12.684502   80404 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 21:43:12.684595   80404 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 21:43:12.684604   80404 kubeadm.go:309] 
	I0612 21:43:12.684700   80404 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 21:43:12.684807   80404 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 21:43:12.684816   80404 kubeadm.go:309] 
	I0612 21:43:12.684915   80404 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0iiegq.ujvrnknfmyshffxu \
	I0612 21:43:12.685061   80404 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 21:43:12.685105   80404 kubeadm.go:309] 	--control-plane 
	I0612 21:43:12.685116   80404 kubeadm.go:309] 
	I0612 21:43:12.685237   80404 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 21:43:12.685248   80404 kubeadm.go:309] 
	I0612 21:43:12.685352   80404 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0iiegq.ujvrnknfmyshffxu \
	I0612 21:43:12.685509   80404 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 21:43:12.685622   80404 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:43:12.685831   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:43:12.685848   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:43:12.687835   80404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:43:12.689100   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:43:12.700384   80404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:43:12.720228   80404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:43:12.720305   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:12.720330   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-591460 minikube.k8s.io/updated_at=2024_06_12T21_43_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=embed-certs-591460 minikube.k8s.io/primary=true
	I0612 21:43:12.751866   80404 ops.go:34] apiserver oom_adj: -16
	I0612 21:43:12.927644   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:13.428393   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:13.928221   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:14.428286   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:12.817125   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:15.316899   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:14.928273   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:15.428431   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:15.927968   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:16.428202   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:16.927882   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.428544   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.927844   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:18.428385   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:18.928105   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:19.428421   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.317080   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:19.317419   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:21.816670   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:19.928638   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:20.428310   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:20.928565   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:21.428377   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:21.928158   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:22.428356   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:22.927863   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:23.427955   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:23.928226   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:24.427823   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:24.928404   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:25.428367   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:25.514417   80404 kubeadm.go:1107] duration metric: took 12.794169259s to wait for elevateKubeSystemPrivileges
	W0612 21:43:25.514460   80404 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 21:43:25.514470   80404 kubeadm.go:393] duration metric: took 5m14.162212832s to StartCluster
	I0612 21:43:25.514490   80404 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:43:25.514576   80404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:43:25.518597   80404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:43:25.518811   80404 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:43:25.520571   80404 out.go:177] * Verifying Kubernetes components...
	I0612 21:43:25.518903   80404 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:43:25.519030   80404 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:43:25.521967   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:43:25.522001   80404 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-591460"
	I0612 21:43:25.522043   80404 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-591460"
	W0612 21:43:25.522056   80404 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:43:25.522053   80404 addons.go:69] Setting default-storageclass=true in profile "embed-certs-591460"
	I0612 21:43:25.522089   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.522100   80404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-591460"
	I0612 21:43:25.522089   80404 addons.go:69] Setting metrics-server=true in profile "embed-certs-591460"
	I0612 21:43:25.522158   80404 addons.go:234] Setting addon metrics-server=true in "embed-certs-591460"
	W0612 21:43:25.522170   80404 addons.go:243] addon metrics-server should already be in state true
	I0612 21:43:25.522196   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.522502   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522509   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522532   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.522535   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.522585   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522611   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.538989   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I0612 21:43:25.539032   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0612 21:43:25.539591   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.539592   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.540199   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.540222   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.540293   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.540323   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.540610   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.540736   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.541265   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.541281   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.541312   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.541431   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.542393   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46299
	I0612 21:43:25.543042   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.543604   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.543643   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.543997   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.544208   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.547823   80404 addons.go:234] Setting addon default-storageclass=true in "embed-certs-591460"
	W0612 21:43:25.547849   80404 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:43:25.547878   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.548237   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.548272   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.558486   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46589
	I0612 21:43:25.558934   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.559936   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.559967   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.560387   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.560600   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.560728   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I0612 21:43:25.561116   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.561595   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.561610   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.561928   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.562198   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.562832   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.565065   80404 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:43:25.563946   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.565393   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0612 21:43:25.566521   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:43:25.566535   80404 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:43:25.566582   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.568114   80404 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:43:24.316660   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:25.810857   80157 pod_ready.go:81] duration metric: took 4m0.000926725s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" ...
	E0612 21:43:25.810888   80157 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0612 21:43:25.810936   80157 pod_ready.go:38] duration metric: took 4m14.539121336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:25.810971   80157 kubeadm.go:591] duration metric: took 4m21.56451584s to restartPrimaryControlPlane
	W0612 21:43:25.811042   80157 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:43:25.811074   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:43:25.567032   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.569772   80404 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:43:25.569794   80404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:43:25.569812   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.570271   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.570291   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.570363   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.570698   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.571498   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.571514   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.571539   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.571691   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.571861   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.572032   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.572851   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.572894   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.573962   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.574403   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.574429   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.574762   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.574974   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.575164   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.575464   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.589637   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39227
	I0612 21:43:25.590155   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.591035   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.591059   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.591596   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.591845   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.593885   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.594095   80404 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:43:25.594112   80404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:43:25.594131   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.597769   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.598347   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.598379   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.598434   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.598635   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.598766   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.598860   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.762134   80404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:43:25.818663   80404 node_ready.go:35] waiting up to 6m0s for node "embed-certs-591460" to be "Ready" ...
	I0612 21:43:25.830753   80404 node_ready.go:49] node "embed-certs-591460" has status "Ready":"True"
	I0612 21:43:25.830780   80404 node_ready.go:38] duration metric: took 12.086962ms for node "embed-certs-591460" to be "Ready" ...
	I0612 21:43:25.830792   80404 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:25.841084   80404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:25.929395   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:43:25.929427   80404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:43:26.001489   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:43:26.016234   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:43:26.016275   80404 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:43:26.030851   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:43:26.062707   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:43:26.062741   80404 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:43:26.157461   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:43:27.281342   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.279809959s)
	I0612 21:43:27.281364   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.250478112s)
	I0612 21:43:27.281392   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281405   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281408   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281420   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281712   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.281730   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.281739   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281748   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281861   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.281916   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.281933   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281942   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.283567   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.283582   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.283592   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.283597   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.283728   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.283740   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.324600   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.324625   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.324937   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.324941   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.324965   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.366096   80404 pod_ready.go:92] pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:27.366126   80404 pod_ready.go:81] duration metric: took 1.52501871s for pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:27.366139   80404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:27.530900   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.373391416s)
	I0612 21:43:27.530973   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.530987   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.531382   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.531399   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.531406   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.531419   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.531428   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.533199   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.533212   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.533226   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.533238   80404 addons.go:475] Verifying addon metrics-server=true in "embed-certs-591460"
	I0612 21:43:27.534895   80404 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:43:27.536129   80404 addons.go:510] duration metric: took 2.017228253s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0612 21:43:28.373835   80404 pod_ready.go:92] pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.373862   80404 pod_ready.go:81] duration metric: took 1.007715736s for pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.373870   80404 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.379042   80404 pod_ready.go:92] pod "etcd-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.379065   80404 pod_ready.go:81] duration metric: took 5.188395ms for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.379078   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.384218   80404 pod_ready.go:92] pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.384233   80404 pod_ready.go:81] duration metric: took 5.148944ms for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.384241   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.389023   80404 pod_ready.go:92] pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.389046   80404 pod_ready.go:81] duration metric: took 4.78947ms for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.389056   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5l2wz" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.623880   80404 pod_ready.go:92] pod "kube-proxy-5l2wz" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.623902   80404 pod_ready.go:81] duration metric: took 234.83854ms for pod "kube-proxy-5l2wz" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.623910   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:29.022477   80404 pod_ready.go:92] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:29.022508   80404 pod_ready.go:81] duration metric: took 398.590821ms for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:29.022522   80404 pod_ready.go:38] duration metric: took 3.191712664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:29.022539   80404 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:43:29.022602   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:43:29.038776   80404 api_server.go:72] duration metric: took 3.51993276s to wait for apiserver process to appear ...
	I0612 21:43:29.038805   80404 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:43:29.038827   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:43:29.045455   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0612 21:43:29.047050   80404 api_server.go:141] control plane version: v1.30.1
	I0612 21:43:29.047072   80404 api_server.go:131] duration metric: took 8.260077ms to wait for apiserver health ...
	I0612 21:43:29.047080   80404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:43:29.226569   80404 system_pods.go:59] 9 kube-system pods found
	I0612 21:43:29.226603   80404 system_pods.go:61] "coredns-7db6d8ff4d-fpf5q" [1091154b-ef24-4447-b294-03f8d704f37e] Running
	I0612 21:43:29.226611   80404 system_pods.go:61] "coredns-7db6d8ff4d-hs7zn" [d8af54bf-17f9-48fe-a770-536c2313bc2a] Running
	I0612 21:43:29.226618   80404 system_pods.go:61] "etcd-embed-certs-591460" [bc7ad3a2-6cb6-4c32-94a7-20f6e3337b86] Running
	I0612 21:43:29.226624   80404 system_pods.go:61] "kube-apiserver-embed-certs-591460" [94b14cb3-5c3d-4be7-b5dc-3259d1fac58c] Running
	I0612 21:43:29.226631   80404 system_pods.go:61] "kube-controller-manager-embed-certs-591460" [c66f1ad8-df77-466e-9bbf-292e0937c7df] Running
	I0612 21:43:29.226636   80404 system_pods.go:61] "kube-proxy-5l2wz" [7130c7fb-880b-4a7b-937d-3980c89f217a] Running
	I0612 21:43:29.226642   80404 system_pods.go:61] "kube-scheduler-embed-certs-591460" [a02c9ded-942d-4107-a8f5-878a7924f1a4] Running
	I0612 21:43:29.226652   80404 system_pods.go:61] "metrics-server-569cc877fc-r7fbt" [e33a1ff8-3032-4be5-8b6a-3eedfbb92611] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:43:29.226659   80404 system_pods.go:61] "storage-provisioner" [ade8816b-866c-4ba3-9665-fc9b144a4286] Running
	I0612 21:43:29.226671   80404 system_pods.go:74] duration metric: took 179.583899ms to wait for pod list to return data ...
	I0612 21:43:29.226684   80404 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:43:29.422244   80404 default_sa.go:45] found service account: "default"
	I0612 21:43:29.422278   80404 default_sa.go:55] duration metric: took 195.585835ms for default service account to be created ...
	I0612 21:43:29.422290   80404 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:43:29.626614   80404 system_pods.go:86] 9 kube-system pods found
	I0612 21:43:29.626650   80404 system_pods.go:89] "coredns-7db6d8ff4d-fpf5q" [1091154b-ef24-4447-b294-03f8d704f37e] Running
	I0612 21:43:29.626659   80404 system_pods.go:89] "coredns-7db6d8ff4d-hs7zn" [d8af54bf-17f9-48fe-a770-536c2313bc2a] Running
	I0612 21:43:29.626667   80404 system_pods.go:89] "etcd-embed-certs-591460" [bc7ad3a2-6cb6-4c32-94a7-20f6e3337b86] Running
	I0612 21:43:29.626673   80404 system_pods.go:89] "kube-apiserver-embed-certs-591460" [94b14cb3-5c3d-4be7-b5dc-3259d1fac58c] Running
	I0612 21:43:29.626680   80404 system_pods.go:89] "kube-controller-manager-embed-certs-591460" [c66f1ad8-df77-466e-9bbf-292e0937c7df] Running
	I0612 21:43:29.626687   80404 system_pods.go:89] "kube-proxy-5l2wz" [7130c7fb-880b-4a7b-937d-3980c89f217a] Running
	I0612 21:43:29.626693   80404 system_pods.go:89] "kube-scheduler-embed-certs-591460" [a02c9ded-942d-4107-a8f5-878a7924f1a4] Running
	I0612 21:43:29.626703   80404 system_pods.go:89] "metrics-server-569cc877fc-r7fbt" [e33a1ff8-3032-4be5-8b6a-3eedfbb92611] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:43:29.626714   80404 system_pods.go:89] "storage-provisioner" [ade8816b-866c-4ba3-9665-fc9b144a4286] Running
	I0612 21:43:29.626725   80404 system_pods.go:126] duration metric: took 204.428087ms to wait for k8s-apps to be running ...
	I0612 21:43:29.626737   80404 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:43:29.626793   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:29.642423   80404 system_svc.go:56] duration metric: took 15.67694ms WaitForService to wait for kubelet
	I0612 21:43:29.642457   80404 kubeadm.go:576] duration metric: took 4.123619864s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:43:29.642481   80404 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:43:29.825804   80404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:43:29.825833   80404 node_conditions.go:123] node cpu capacity is 2
	I0612 21:43:29.825846   80404 node_conditions.go:105] duration metric: took 183.359091ms to run NodePressure ...
	I0612 21:43:29.825860   80404 start.go:240] waiting for startup goroutines ...
	I0612 21:43:29.825868   80404 start.go:245] waiting for cluster config update ...
	I0612 21:43:29.825881   80404 start.go:254] writing updated cluster config ...
	I0612 21:43:29.826229   80404 ssh_runner.go:195] Run: rm -f paused
	I0612 21:43:29.878580   80404 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:43:29.880438   80404 out.go:177] * Done! kubectl is now configured to use "embed-certs-591460" cluster and "default" namespace by default
	I0612 21:43:57.924825   80157 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.113719509s)
	I0612 21:43:57.924912   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:57.942507   80157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:43:57.953901   80157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:43:57.964374   80157 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:43:57.964396   80157 kubeadm.go:156] found existing configuration files:
	
	I0612 21:43:57.964439   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:43:57.974281   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:43:57.974366   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:43:57.985000   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:43:57.995268   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:43:57.995346   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:43:58.005482   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:43:58.015598   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:43:58.015659   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:43:58.028582   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:43:58.038706   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:43:58.038756   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:43:58.051818   80157 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:43:58.110576   80157 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 21:43:58.110645   80157 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:43:58.274454   80157 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:43:58.274625   80157 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:43:58.274751   80157 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:43:58.484837   80157 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:43:58.486643   80157 out.go:204]   - Generating certificates and keys ...
	I0612 21:43:58.486753   80157 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:43:58.486845   80157 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:43:58.486963   80157 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:43:58.487058   80157 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:43:58.487192   80157 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:43:58.487283   80157 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:43:58.487368   80157 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:43:58.487452   80157 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:43:58.487559   80157 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:43:58.487653   80157 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:43:58.487728   80157 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:43:58.487826   80157 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:43:58.644916   80157 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:43:58.789369   80157 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 21:43:58.924153   80157 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:43:59.044332   80157 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:43:59.352910   80157 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:43:59.353462   80157 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:43:59.356967   80157 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:43:59.359470   80157 out.go:204]   - Booting up control plane ...
	I0612 21:43:59.359596   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:43:59.359687   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:43:59.359792   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:43:59.378280   80157 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:43:59.379149   80157 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:43:59.379240   80157 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:43:59.521694   80157 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 21:43:59.521775   80157 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 21:44:00.036696   80157 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 514.972931ms
	I0612 21:44:00.036836   80157 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 21:44:05.539363   80157 kubeadm.go:309] [api-check] The API server is healthy after 5.502859715s
	I0612 21:44:05.552779   80157 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 21:44:05.567296   80157 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 21:44:05.603398   80157 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 21:44:05.603707   80157 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-087875 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 21:44:05.619311   80157 kubeadm.go:309] [bootstrap-token] Using token: x2knjj.1kuv2wdowwsbztfg
	I0612 21:44:05.621026   80157 out.go:204]   - Configuring RBAC rules ...
	I0612 21:44:05.621180   80157 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 21:44:05.628474   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 21:44:05.642438   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 21:44:05.647606   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 21:44:05.651982   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 21:44:05.656129   80157 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 21:44:05.947680   80157 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 21:44:06.430716   80157 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 21:44:06.950446   80157 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 21:44:06.951688   80157 kubeadm.go:309] 
	I0612 21:44:06.951771   80157 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 21:44:06.951782   80157 kubeadm.go:309] 
	I0612 21:44:06.951857   80157 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 21:44:06.951866   80157 kubeadm.go:309] 
	I0612 21:44:06.951919   80157 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 21:44:06.952007   80157 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 21:44:06.952083   80157 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 21:44:06.952094   80157 kubeadm.go:309] 
	I0612 21:44:06.952160   80157 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 21:44:06.952172   80157 kubeadm.go:309] 
	I0612 21:44:06.952222   80157 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 21:44:06.952232   80157 kubeadm.go:309] 
	I0612 21:44:06.952285   80157 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 21:44:06.952375   80157 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 21:44:06.952460   80157 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 21:44:06.952476   80157 kubeadm.go:309] 
	I0612 21:44:06.952612   80157 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 21:44:06.952711   80157 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 21:44:06.952722   80157 kubeadm.go:309] 
	I0612 21:44:06.952819   80157 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token x2knjj.1kuv2wdowwsbztfg \
	I0612 21:44:06.952933   80157 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 21:44:06.952963   80157 kubeadm.go:309] 	--control-plane 
	I0612 21:44:06.952985   80157 kubeadm.go:309] 
	I0612 21:44:06.953100   80157 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 21:44:06.953114   80157 kubeadm.go:309] 
	I0612 21:44:06.953219   80157 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token x2knjj.1kuv2wdowwsbztfg \
	I0612 21:44:06.953373   80157 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 21:44:06.953943   80157 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:44:06.953986   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:44:06.954003   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:44:06.956587   80157 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:44:06.957989   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:44:06.972666   80157 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:44:07.000720   80157 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:44:07.000822   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:07.000839   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-087875 minikube.k8s.io/updated_at=2024_06_12T21_44_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=no-preload-087875 minikube.k8s.io/primary=true
	I0612 21:44:07.201613   80157 ops.go:34] apiserver oom_adj: -16
	I0612 21:44:07.201713   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:07.702791   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:08.201886   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:08.702020   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:09.202755   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:09.702683   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:10.202007   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:10.702272   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:11.201764   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:11.702383   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:12.201880   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:12.702587   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:13.202524   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:13.702498   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:14.202157   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:14.702197   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:15.201852   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:15.702444   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:16.201919   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:16.701722   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:17.202307   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:17.701823   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:18.202602   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:18.702354   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:19.202207   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:19.308654   80157 kubeadm.go:1107] duration metric: took 12.307897648s to wait for elevateKubeSystemPrivileges
	W0612 21:44:19.308699   80157 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 21:44:19.308709   80157 kubeadm.go:393] duration metric: took 5m15.118303799s to StartCluster
	I0612 21:44:19.308738   80157 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:44:19.308825   80157 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:44:19.311295   80157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:44:19.311587   80157 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:44:19.313263   80157 out.go:177] * Verifying Kubernetes components...
	I0612 21:44:19.311693   80157 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:44:19.311780   80157 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:44:19.315137   80157 addons.go:69] Setting storage-provisioner=true in profile "no-preload-087875"
	I0612 21:44:19.315148   80157 addons.go:69] Setting default-storageclass=true in profile "no-preload-087875"
	I0612 21:44:19.315192   80157 addons.go:234] Setting addon storage-provisioner=true in "no-preload-087875"
	I0612 21:44:19.315201   80157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-087875"
	I0612 21:44:19.315202   80157 addons.go:69] Setting metrics-server=true in profile "no-preload-087875"
	I0612 21:44:19.315240   80157 addons.go:234] Setting addon metrics-server=true in "no-preload-087875"
	W0612 21:44:19.315255   80157 addons.go:243] addon metrics-server should already be in state true
	I0612 21:44:19.315296   80157 host.go:66] Checking if "no-preload-087875" exists ...
	W0612 21:44:19.315209   80157 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:44:19.315397   80157 host.go:66] Checking if "no-preload-087875" exists ...
	I0612 21:44:19.315139   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:44:19.315636   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315666   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.315653   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315698   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.315731   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315750   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.331461   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40419
	I0612 21:44:19.331495   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I0612 21:44:19.331924   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.332019   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.332446   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.332466   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.332580   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.332603   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.332866   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.332911   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.333087   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.333484   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.333508   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.334462   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0612 21:44:19.334922   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.335447   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.335474   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.335812   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.336376   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.336408   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.336657   80157 addons.go:234] Setting addon default-storageclass=true in "no-preload-087875"
	W0612 21:44:19.336675   80157 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:44:19.336701   80157 host.go:66] Checking if "no-preload-087875" exists ...
	I0612 21:44:19.337047   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.337078   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.350724   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I0612 21:44:19.351308   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.351869   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.351897   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.352272   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.352503   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.354434   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I0612 21:44:19.354532   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.356594   80157 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:44:19.354927   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.355284   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37489
	I0612 21:44:19.357181   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.358026   80157 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:44:19.357219   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.358040   80157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:44:19.358048   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.358058   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.358407   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.358560   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.358577   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.359024   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.359035   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.359069   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.359408   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.361013   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.361524   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.363337   80157 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:44:19.361921   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.362312   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.364713   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:44:19.364727   80157 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:44:19.364736   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.364744   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.365021   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.365260   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.365419   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.368572   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.368971   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.368988   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.369144   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.369316   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.369431   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.369538   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.377220   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
	I0612 21:44:19.377598   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.378595   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.378621   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.378931   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.379127   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.380646   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.380844   80157 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:44:19.380857   80157 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:44:19.380869   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.383763   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.384201   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.384216   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.384504   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.384660   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.384816   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.384956   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.516231   80157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:44:19.539205   80157 node_ready.go:35] waiting up to 6m0s for node "no-preload-087875" to be "Ready" ...
	I0612 21:44:19.546948   80157 node_ready.go:49] node "no-preload-087875" has status "Ready":"True"
	I0612 21:44:19.546972   80157 node_ready.go:38] duration metric: took 7.739123ms for node "no-preload-087875" to be "Ready" ...
	I0612 21:44:19.546985   80157 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:44:19.553454   80157 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.562831   80157 pod_ready.go:92] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.562854   80157 pod_ready.go:81] duration metric: took 9.377758ms for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.562862   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.568274   80157 pod_ready.go:92] pod "kube-apiserver-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.568296   80157 pod_ready.go:81] duration metric: took 5.425162ms for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.568306   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.572960   80157 pod_ready.go:92] pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.572991   80157 pod_ready.go:81] duration metric: took 4.669828ms for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.573002   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lnhzt" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.620522   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:44:19.620548   80157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:44:19.654325   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:44:19.681762   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:44:19.681800   80157 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:44:19.699701   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:44:19.774496   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:44:19.774526   80157 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:44:19.874891   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
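	For reference, the addon apply step above amounts to running the bundled kubectl on the node against the copied manifests; a minimal sketch, assuming shell access to the node (e.g. via minikube ssh -p no-preload-087875) and the paths shown in the log:

	    # apply the metrics-server addon manifests that were scp'd to the node
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.30.1/kubectl apply \
	      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	      -f /etc/kubernetes/addons/metrics-server-service.yaml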
	I0612 21:44:20.590260   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590292   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.590276   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590360   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.590587   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.590634   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.590644   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.590651   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590658   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.592402   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.592462   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.592410   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.592411   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.592414   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.592551   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.592476   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.592655   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.592952   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.593069   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.593093   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.634339   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.634370   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.634813   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.634864   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.634880   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.321337   80157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.446394551s)
	I0612 21:44:21.321389   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:21.321403   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:21.321802   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:21.321827   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:21.321968   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.322012   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:21.322023   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:21.322278   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:21.322294   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.322305   80157 addons.go:475] Verifying addon metrics-server=true in "no-preload-087875"
	I0612 21:44:21.324652   80157 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:44:21.326653   80157 addons.go:510] duration metric: took 2.01495884s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0612 21:44:21.589251   80157 pod_ready.go:92] pod "kube-proxy-lnhzt" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:21.589290   80157 pod_ready.go:81] duration metric: took 2.016278458s for pod "kube-proxy-lnhzt" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.589305   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.652083   80157 pod_ready.go:92] pod "kube-scheduler-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:21.652122   80157 pod_ready.go:81] duration metric: took 62.805318ms for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.652136   80157 pod_ready.go:38] duration metric: took 2.105136343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:44:21.652156   80157 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:44:21.652237   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:44:21.683110   80157 api_server.go:72] duration metric: took 2.371482611s to wait for apiserver process to appear ...
	I0612 21:44:21.683148   80157 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:44:21.683187   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:44:21.704637   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 200:
	ok
	I0612 21:44:21.714032   80157 api_server.go:141] control plane version: v1.30.1
	I0612 21:44:21.714061   80157 api_server.go:131] duration metric: took 30.904631ms to wait for apiserver health ...
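	The healthz probe logged above can be reproduced by hand; a minimal sketch, assuming the apiserver certificate is not trusted by the caller (hence -k). On success it prints the literal body "ok":

	    curl -sk https://192.168.72.63:8443/healthz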
	I0612 21:44:21.714070   80157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:44:21.751484   80157 system_pods.go:59] 9 kube-system pods found
	I0612 21:44:21.751520   80157 system_pods.go:61] "coredns-7db6d8ff4d-hsvvf" [2b6c768b-75e2-4c11-99db-1103367ccc20] Running
	I0612 21:44:21.751526   80157 system_pods.go:61] "coredns-7db6d8ff4d-v75tt" [8b48ba7d-8f66-4c31-ac14-3a38e18fa249] Running
	I0612 21:44:21.751532   80157 system_pods.go:61] "etcd-no-preload-087875" [36cea519-d5ea-41f0-893f-358fe8af4448] Running
	I0612 21:44:21.751537   80157 system_pods.go:61] "kube-apiserver-no-preload-087875" [a09319fb-adef-467d-8482-5adf57328c2b] Running
	I0612 21:44:21.751544   80157 system_pods.go:61] "kube-controller-manager-no-preload-087875" [466fead1-a45a-4b33-8587-dc894fa20073] Running
	I0612 21:44:21.751548   80157 system_pods.go:61] "kube-proxy-lnhzt" [bdf1156c-ba02-4551-aefa-66379b05e066] Running
	I0612 21:44:21.751552   80157 system_pods.go:61] "kube-scheduler-no-preload-087875" [fc8eccee-2e27-4ea0-9e6c-0d5c127cdd4f] Running
	I0612 21:44:21.751560   80157 system_pods.go:61] "metrics-server-569cc877fc-mdmgw" [17725ee6-1d17-4a1b-9c65-f596b9b7725f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:44:21.751568   80157 system_pods.go:61] "storage-provisioner" [90368fec-12d9-4baf-aef6-233691b5e99d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:44:21.751581   80157 system_pods.go:74] duration metric: took 37.503399ms to wait for pod list to return data ...
	I0612 21:44:21.751595   80157 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:44:21.943440   80157 default_sa.go:45] found service account: "default"
	I0612 21:44:21.943465   80157 default_sa.go:55] duration metric: took 191.863221ms for default service account to be created ...
	I0612 21:44:21.943473   80157 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:44:22.146922   80157 system_pods.go:86] 9 kube-system pods found
	I0612 21:44:22.146960   80157 system_pods.go:89] "coredns-7db6d8ff4d-hsvvf" [2b6c768b-75e2-4c11-99db-1103367ccc20] Running
	I0612 21:44:22.146969   80157 system_pods.go:89] "coredns-7db6d8ff4d-v75tt" [8b48ba7d-8f66-4c31-ac14-3a38e18fa249] Running
	I0612 21:44:22.146975   80157 system_pods.go:89] "etcd-no-preload-087875" [36cea519-d5ea-41f0-893f-358fe8af4448] Running
	I0612 21:44:22.146982   80157 system_pods.go:89] "kube-apiserver-no-preload-087875" [a09319fb-adef-467d-8482-5adf57328c2b] Running
	I0612 21:44:22.146988   80157 system_pods.go:89] "kube-controller-manager-no-preload-087875" [466fead1-a45a-4b33-8587-dc894fa20073] Running
	I0612 21:44:22.146994   80157 system_pods.go:89] "kube-proxy-lnhzt" [bdf1156c-ba02-4551-aefa-66379b05e066] Running
	I0612 21:44:22.147000   80157 system_pods.go:89] "kube-scheduler-no-preload-087875" [fc8eccee-2e27-4ea0-9e6c-0d5c127cdd4f] Running
	I0612 21:44:22.147012   80157 system_pods.go:89] "metrics-server-569cc877fc-mdmgw" [17725ee6-1d17-4a1b-9c65-f596b9b7725f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:44:22.147030   80157 system_pods.go:89] "storage-provisioner" [90368fec-12d9-4baf-aef6-233691b5e99d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:44:22.147042   80157 system_pods.go:126] duration metric: took 203.562938ms to wait for k8s-apps to be running ...
	I0612 21:44:22.147056   80157 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:44:22.147110   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:44:22.167568   80157 system_svc.go:56] duration metric: took 20.500218ms WaitForService to wait for kubelet
	I0612 21:44:22.167606   80157 kubeadm.go:576] duration metric: took 2.855984791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:44:22.167627   80157 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:44:22.343015   80157 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:44:22.343039   80157 node_conditions.go:123] node cpu capacity is 2
	I0612 21:44:22.343051   80157 node_conditions.go:105] duration metric: took 175.419211ms to run NodePressure ...
	I0612 21:44:22.343064   80157 start.go:240] waiting for startup goroutines ...
	I0612 21:44:22.343073   80157 start.go:245] waiting for cluster config update ...
	I0612 21:44:22.343085   80157 start.go:254] writing updated cluster config ...
	I0612 21:44:22.343387   80157 ssh_runner.go:195] Run: rm -f paused
	I0612 21:44:22.391092   80157 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:44:22.393268   80157 out.go:177] * Done! kubectl is now configured to use "no-preload-087875" cluster and "default" namespace by default
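	After the profile reports Done, a quick sanity check from the host is possible; a hypothetical follow-up (not part of the test run), assuming minikube named the kubectl context after the profile as it normally does:

	    kubectl --context no-preload-087875 get nodes
	    kubectl --context no-preload-087875 -n kube-system get pods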
	I0612 21:44:37.700712   80762 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:44:37.700862   80762 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:44:37.702455   80762 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:44:37.702552   80762 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:44:37.702639   80762 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:44:37.702749   80762 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:44:37.702887   80762 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:44:37.702992   80762 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:44:37.704955   80762 out.go:204]   - Generating certificates and keys ...
	I0612 21:44:37.705032   80762 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:44:37.705088   80762 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:44:37.705159   80762 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:44:37.705228   80762 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:44:37.705289   80762 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:44:37.705368   80762 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:44:37.705467   80762 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:44:37.705538   80762 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:44:37.705620   80762 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:44:37.705683   80762 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:44:37.705723   80762 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:44:37.705773   80762 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:44:37.705816   80762 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:44:37.705861   80762 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:44:37.705917   80762 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:44:37.705964   80762 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:44:37.706062   80762 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:44:37.706172   80762 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:44:37.706231   80762 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:44:37.706288   80762 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:44:37.707753   80762 out.go:204]   - Booting up control plane ...
	I0612 21:44:37.707857   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:44:37.707931   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:44:37.707994   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:44:37.708064   80762 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:44:37.708197   80762 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:44:37.708251   80762 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:44:37.708344   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.708536   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.708600   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.708770   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.708864   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709067   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709133   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709340   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709441   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709638   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709650   80762 kubeadm.go:309] 
	I0612 21:44:37.709683   80762 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:44:37.709721   80762 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:44:37.709728   80762 kubeadm.go:309] 
	I0612 21:44:37.709777   80762 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:44:37.709817   80762 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:44:37.709910   80762 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:44:37.709917   80762 kubeadm.go:309] 
	I0612 21:44:37.710018   80762 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:44:37.710052   80762 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:44:37.710083   80762 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:44:37.710089   80762 kubeadm.go:309] 
	I0612 21:44:37.710184   80762 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:44:37.710259   80762 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:44:37.710265   80762 kubeadm.go:309] 
	I0612 21:44:37.710359   80762 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:44:37.710431   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:44:37.710497   80762 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:44:37.710563   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:44:37.710607   80762 kubeadm.go:309] 
	W0612 21:44:37.710666   80762 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
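	When the wait-control-plane phase times out like the run above, the checks kubeadm itself suggests can be run directly on the node; a minimal sketch using the CRI socket path printed in the log:

	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet | tail -n 100
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause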
	
	I0612 21:44:37.710709   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:44:38.170461   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:44:38.186842   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:44:38.198380   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:44:38.198400   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:44:38.198454   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:44:38.208876   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:44:38.208948   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:44:38.219641   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:44:38.229622   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:44:38.229685   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:44:38.240153   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:44:38.251342   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:44:38.251401   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:44:38.262662   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:44:38.272898   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:44:38.272954   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
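	The stale-config check above reduces to: keep each kubeconfig only if it already points at the expected control-plane endpoint, otherwise remove it; a rough shell equivalent of what the log shows:

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done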
	I0612 21:44:38.283213   80762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:44:38.501637   80762 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:46:34.582636   80762 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:46:34.582745   80762 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:46:34.584702   80762 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:46:34.584775   80762 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:46:34.584898   80762 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:46:34.585029   80762 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:46:34.585172   80762 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:46:34.585263   80762 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:46:34.587030   80762 out.go:204]   - Generating certificates and keys ...
	I0612 21:46:34.587101   80762 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:46:34.587160   80762 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:46:34.587260   80762 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:46:34.587349   80762 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:46:34.587446   80762 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:46:34.587521   80762 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:46:34.587609   80762 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:46:34.587697   80762 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:46:34.587803   80762 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:46:34.587886   80762 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:46:34.588014   80762 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:46:34.588097   80762 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:46:34.588177   80762 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:46:34.588268   80762 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:46:34.588381   80762 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:46:34.588447   80762 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:46:34.588558   80762 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:46:34.588659   80762 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:46:34.588719   80762 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:46:34.588816   80762 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:46:34.590114   80762 out.go:204]   - Booting up control plane ...
	I0612 21:46:34.590226   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:46:34.590326   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:46:34.590444   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:46:34.590527   80762 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:46:34.590710   80762 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:46:34.590778   80762 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:46:34.590847   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591054   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591149   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591411   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591508   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591743   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591846   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.592108   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.592205   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.592395   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.592403   80762 kubeadm.go:309] 
	I0612 21:46:34.592436   80762 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:46:34.592485   80762 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:46:34.592500   80762 kubeadm.go:309] 
	I0612 21:46:34.592535   80762 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:46:34.592563   80762 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:46:34.592677   80762 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:46:34.592688   80762 kubeadm.go:309] 
	I0612 21:46:34.592820   80762 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:46:34.592855   80762 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:46:34.592883   80762 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:46:34.592890   80762 kubeadm.go:309] 
	I0612 21:46:34.593007   80762 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:46:34.593107   80762 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:46:34.593116   80762 kubeadm.go:309] 
	I0612 21:46:34.593224   80762 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:46:34.593342   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:46:34.593426   80762 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:46:34.593494   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:46:34.593552   80762 kubeadm.go:393] duration metric: took 8m2.356271864s to StartCluster
	I0612 21:46:34.593558   80762 kubeadm.go:309] 
	I0612 21:46:34.593589   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:46:34.593639   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:46:34.643842   80762 cri.go:89] found id: ""
	I0612 21:46:34.643876   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.643887   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:46:34.643905   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:46:34.643982   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:46:34.682878   80762 cri.go:89] found id: ""
	I0612 21:46:34.682899   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.682906   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:46:34.682912   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:46:34.682961   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:46:34.721931   80762 cri.go:89] found id: ""
	I0612 21:46:34.721955   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.721964   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:46:34.721969   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:46:34.722021   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:46:34.759233   80762 cri.go:89] found id: ""
	I0612 21:46:34.759266   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.759274   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:46:34.759280   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:46:34.759333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:46:34.800142   80762 cri.go:89] found id: ""
	I0612 21:46:34.800176   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.800186   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:46:34.800194   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:46:34.800256   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:46:34.836746   80762 cri.go:89] found id: ""
	I0612 21:46:34.836774   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.836784   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:46:34.836791   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:46:34.836850   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:46:34.876108   80762 cri.go:89] found id: ""
	I0612 21:46:34.876138   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.876147   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:46:34.876153   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:46:34.876202   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:46:34.912272   80762 cri.go:89] found id: ""
	I0612 21:46:34.912294   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.912301   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:46:34.912310   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:46:34.912324   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:46:34.997300   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:46:34.997331   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:46:34.997347   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:46:35.105602   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:46:35.105638   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:46:35.152818   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:46:35.152857   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:46:35.216504   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:46:35.216545   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
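	The diagnostics gathered above can be collected by hand with the same commands the tooling runs; a minimal sketch:

	    sudo journalctl -u crio -n 400
	    sudo journalctl -u kubelet -n 400
	    sudo crictl ps -a
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400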
	W0612 21:46:35.239531   80762 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0612 21:46:35.239581   80762 out.go:239] * 
	W0612 21:46:35.239646   80762 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:46:35.239672   80762 out.go:239] * 
	W0612 21:46:35.240600   80762 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0612 21:46:35.244822   80762 out.go:177] 
	W0612 21:46:35.246072   80762 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:46:35.246137   80762 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0612 21:46:35.246164   80762 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0612 21:46:35.247768   80762 out.go:177] 
	
	
	==> CRI-O <==
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.457090201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229204457059431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b7f3614-fca0-440d-9490-24e8308056a9 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.457699427Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef8fbe25-662c-4ab5-9cfd-6ad4672ea5ec name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.457749613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef8fbe25-662c-4ab5-9cfd-6ad4672ea5ec name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.457929138Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6d77b024431184651a9e21a458220d2924f4a46103d49a982b82d76487f2ff9,PodSandboxId:f1c342424d4fa0d74624f4863e382e82f1be44d9213f285877a9484b51438e18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228661367728599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90368fec-12d9-4baf-aef6-233691b5e99d,},Annotations:map[string]string{io.kubernetes.container.hash: ab3c8dcd,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f8a87fdb0e00f5579536445325d8b2dc0cfa37844f8747f40d5357afb8cf87,PodSandboxId:31aefcd0f0a8003d4a35aec62f9a43f1dee6afbdf0995d48b1e4a19a3b1f7924,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228661055629396,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsvvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6c768b-75e2-4c11-99db-1103367ccc20,},Annotations:map[string]string{io.kubernetes.container.hash: d5ad641f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bffb5002753da23404659493ed47336a599fda15e4fc48a8f22aa2146c588e85,PodSandboxId:61080e3d2ddf2e6660c3547cfa897a3b97dc067ee9f372872611c4828b04403f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228660869483777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v75tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b
48ba7d-8f66-4c31-ac14-3a38e18fa249,},Annotations:map[string]string{io.kubernetes.container.hash: 728d435d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d1ee15d2a35f909b263e8c592ac6c6bd5a01dc4c45e530fd0a24db98e8eb88,PodSandboxId:580e786b47f15d101e18d13a9631f43760251be9d0147f8cbfbee81d637ed2d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718228660368472103,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdf1156c-ba02-4551-aefa-66379b05e066,},Annotations:map[string]string{io.kubernetes.container.hash: fb7cf440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5253531d0c365ba7a37fe180563ed113f68906bd040776c09bb7aef9562ac80e,PodSandboxId:c4d2a14f93a7daa4c51ebced3fa88df7372518d23e18408aac8ce801f85a0b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228640704921735,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fd711c83c9b417403b6a9e31847398,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d4fb81f507b1127559b8713eadff985fc51dfd8b7106a3a0c8ea9f28b027fc,PodSandboxId:c00063a5386b0f11c81d8e99f5364d71d24daa6724b1361f5d69d6edbc7610e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228640714242866,Label
s:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68e866e8b2a2984f62db205dab7b3e4f,},Annotations:map[string]string{io.kubernetes.container.hash: f64610c1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8bcdefdd9089db199dd6927625d23ce5553cc46a0949830ebce16e23e24bf,PodSandboxId:f04141bf9a6264c590d76ba434b0444355cf3b456d397b8081ad0ddb52d0ceca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228640701638256,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59e66dff9d6757e593577e4be5a7bcf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b712747a34d00d68d998ce34e9f775f0ddf3fc9d427853334fc3d043d9bd617d,PodSandboxId:70e422e682f35fdd17cdbdad8183e193a35ac883ebbbb1b1e21fa43e0f4505f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228640607697101,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656e6a3b53b4be584918cbaf50560652,},Annotations:map[string]string{io.kubernetes.container.hash: 995ac9bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef8fbe25-662c-4ab5-9cfd-6ad4672ea5ec name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.502861345Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39e1c3a8-e59b-498c-96a8-dca4881e789b name=/runtime.v1.RuntimeService/Version
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.502930865Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39e1c3a8-e59b-498c-96a8-dca4881e789b name=/runtime.v1.RuntimeService/Version
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.504266293Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0c7faaa-ff72-4caf-9c4a-cf2d012815ed name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.504797775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229204504771952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0c7faaa-ff72-4caf-9c4a-cf2d012815ed name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.505430947Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6801121e-bc99-4cb7-a7db-7d2918f966bd name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.505485120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6801121e-bc99-4cb7-a7db-7d2918f966bd name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.505735027Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6d77b024431184651a9e21a458220d2924f4a46103d49a982b82d76487f2ff9,PodSandboxId:f1c342424d4fa0d74624f4863e382e82f1be44d9213f285877a9484b51438e18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228661367728599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90368fec-12d9-4baf-aef6-233691b5e99d,},Annotations:map[string]string{io.kubernetes.container.hash: ab3c8dcd,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f8a87fdb0e00f5579536445325d8b2dc0cfa37844f8747f40d5357afb8cf87,PodSandboxId:31aefcd0f0a8003d4a35aec62f9a43f1dee6afbdf0995d48b1e4a19a3b1f7924,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228661055629396,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsvvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6c768b-75e2-4c11-99db-1103367ccc20,},Annotations:map[string]string{io.kubernetes.container.hash: d5ad641f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bffb5002753da23404659493ed47336a599fda15e4fc48a8f22aa2146c588e85,PodSandboxId:61080e3d2ddf2e6660c3547cfa897a3b97dc067ee9f372872611c4828b04403f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228660869483777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v75tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b
48ba7d-8f66-4c31-ac14-3a38e18fa249,},Annotations:map[string]string{io.kubernetes.container.hash: 728d435d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d1ee15d2a35f909b263e8c592ac6c6bd5a01dc4c45e530fd0a24db98e8eb88,PodSandboxId:580e786b47f15d101e18d13a9631f43760251be9d0147f8cbfbee81d637ed2d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718228660368472103,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdf1156c-ba02-4551-aefa-66379b05e066,},Annotations:map[string]string{io.kubernetes.container.hash: fb7cf440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5253531d0c365ba7a37fe180563ed113f68906bd040776c09bb7aef9562ac80e,PodSandboxId:c4d2a14f93a7daa4c51ebced3fa88df7372518d23e18408aac8ce801f85a0b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228640704921735,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fd711c83c9b417403b6a9e31847398,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d4fb81f507b1127559b8713eadff985fc51dfd8b7106a3a0c8ea9f28b027fc,PodSandboxId:c00063a5386b0f11c81d8e99f5364d71d24daa6724b1361f5d69d6edbc7610e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228640714242866,Label
s:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68e866e8b2a2984f62db205dab7b3e4f,},Annotations:map[string]string{io.kubernetes.container.hash: f64610c1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8bcdefdd9089db199dd6927625d23ce5553cc46a0949830ebce16e23e24bf,PodSandboxId:f04141bf9a6264c590d76ba434b0444355cf3b456d397b8081ad0ddb52d0ceca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228640701638256,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59e66dff9d6757e593577e4be5a7bcf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b712747a34d00d68d998ce34e9f775f0ddf3fc9d427853334fc3d043d9bd617d,PodSandboxId:70e422e682f35fdd17cdbdad8183e193a35ac883ebbbb1b1e21fa43e0f4505f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228640607697101,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656e6a3b53b4be584918cbaf50560652,},Annotations:map[string]string{io.kubernetes.container.hash: 995ac9bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6801121e-bc99-4cb7-a7db-7d2918f966bd name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.546327767Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41e247a4-6525-488d-9c97-df4295fbea89 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.546401801Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41e247a4-6525-488d-9c97-df4295fbea89 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.547425656Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a85c156-fa94-4848-85c1-8e49867505cb name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.548381953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229204548315620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a85c156-fa94-4848-85c1-8e49867505cb name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.549423341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1704722e-c890-4bcf-98cd-12c68d9a3030 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.549476629Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1704722e-c890-4bcf-98cd-12c68d9a3030 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.549723810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6d77b024431184651a9e21a458220d2924f4a46103d49a982b82d76487f2ff9,PodSandboxId:f1c342424d4fa0d74624f4863e382e82f1be44d9213f285877a9484b51438e18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228661367728599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90368fec-12d9-4baf-aef6-233691b5e99d,},Annotations:map[string]string{io.kubernetes.container.hash: ab3c8dcd,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f8a87fdb0e00f5579536445325d8b2dc0cfa37844f8747f40d5357afb8cf87,PodSandboxId:31aefcd0f0a8003d4a35aec62f9a43f1dee6afbdf0995d48b1e4a19a3b1f7924,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228661055629396,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsvvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6c768b-75e2-4c11-99db-1103367ccc20,},Annotations:map[string]string{io.kubernetes.container.hash: d5ad641f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bffb5002753da23404659493ed47336a599fda15e4fc48a8f22aa2146c588e85,PodSandboxId:61080e3d2ddf2e6660c3547cfa897a3b97dc067ee9f372872611c4828b04403f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228660869483777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v75tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b
48ba7d-8f66-4c31-ac14-3a38e18fa249,},Annotations:map[string]string{io.kubernetes.container.hash: 728d435d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d1ee15d2a35f909b263e8c592ac6c6bd5a01dc4c45e530fd0a24db98e8eb88,PodSandboxId:580e786b47f15d101e18d13a9631f43760251be9d0147f8cbfbee81d637ed2d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718228660368472103,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdf1156c-ba02-4551-aefa-66379b05e066,},Annotations:map[string]string{io.kubernetes.container.hash: fb7cf440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5253531d0c365ba7a37fe180563ed113f68906bd040776c09bb7aef9562ac80e,PodSandboxId:c4d2a14f93a7daa4c51ebced3fa88df7372518d23e18408aac8ce801f85a0b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228640704921735,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fd711c83c9b417403b6a9e31847398,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d4fb81f507b1127559b8713eadff985fc51dfd8b7106a3a0c8ea9f28b027fc,PodSandboxId:c00063a5386b0f11c81d8e99f5364d71d24daa6724b1361f5d69d6edbc7610e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228640714242866,Label
s:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68e866e8b2a2984f62db205dab7b3e4f,},Annotations:map[string]string{io.kubernetes.container.hash: f64610c1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8bcdefdd9089db199dd6927625d23ce5553cc46a0949830ebce16e23e24bf,PodSandboxId:f04141bf9a6264c590d76ba434b0444355cf3b456d397b8081ad0ddb52d0ceca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228640701638256,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59e66dff9d6757e593577e4be5a7bcf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b712747a34d00d68d998ce34e9f775f0ddf3fc9d427853334fc3d043d9bd617d,PodSandboxId:70e422e682f35fdd17cdbdad8183e193a35ac883ebbbb1b1e21fa43e0f4505f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228640607697101,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656e6a3b53b4be584918cbaf50560652,},Annotations:map[string]string{io.kubernetes.container.hash: 995ac9bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1704722e-c890-4bcf-98cd-12c68d9a3030 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.592785111Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32c7c466-c68e-4ceb-b4e4-9d8b89befc4e name=/runtime.v1.RuntimeService/Version
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.592886333Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32c7c466-c68e-4ceb-b4e4-9d8b89befc4e name=/runtime.v1.RuntimeService/Version
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.603133185Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5c208a63-5a43-47c2-80e2-95e01958bf89 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.604800474Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229204604763359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c208a63-5a43-47c2-80e2-95e01958bf89 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.606455725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c7bad01-7881-4780-978f-53adb57a2ddb name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.606652307Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c7bad01-7881-4780-978f-53adb57a2ddb name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:53:24 no-preload-087875 crio[720]: time="2024-06-12 21:53:24.606888696Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6d77b024431184651a9e21a458220d2924f4a46103d49a982b82d76487f2ff9,PodSandboxId:f1c342424d4fa0d74624f4863e382e82f1be44d9213f285877a9484b51438e18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228661367728599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90368fec-12d9-4baf-aef6-233691b5e99d,},Annotations:map[string]string{io.kubernetes.container.hash: ab3c8dcd,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f8a87fdb0e00f5579536445325d8b2dc0cfa37844f8747f40d5357afb8cf87,PodSandboxId:31aefcd0f0a8003d4a35aec62f9a43f1dee6afbdf0995d48b1e4a19a3b1f7924,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228661055629396,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsvvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6c768b-75e2-4c11-99db-1103367ccc20,},Annotations:map[string]string{io.kubernetes.container.hash: d5ad641f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bffb5002753da23404659493ed47336a599fda15e4fc48a8f22aa2146c588e85,PodSandboxId:61080e3d2ddf2e6660c3547cfa897a3b97dc067ee9f372872611c4828b04403f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228660869483777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v75tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b
48ba7d-8f66-4c31-ac14-3a38e18fa249,},Annotations:map[string]string{io.kubernetes.container.hash: 728d435d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d1ee15d2a35f909b263e8c592ac6c6bd5a01dc4c45e530fd0a24db98e8eb88,PodSandboxId:580e786b47f15d101e18d13a9631f43760251be9d0147f8cbfbee81d637ed2d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718228660368472103,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdf1156c-ba02-4551-aefa-66379b05e066,},Annotations:map[string]string{io.kubernetes.container.hash: fb7cf440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5253531d0c365ba7a37fe180563ed113f68906bd040776c09bb7aef9562ac80e,PodSandboxId:c4d2a14f93a7daa4c51ebced3fa88df7372518d23e18408aac8ce801f85a0b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228640704921735,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fd711c83c9b417403b6a9e31847398,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d4fb81f507b1127559b8713eadff985fc51dfd8b7106a3a0c8ea9f28b027fc,PodSandboxId:c00063a5386b0f11c81d8e99f5364d71d24daa6724b1361f5d69d6edbc7610e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228640714242866,Label
s:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68e866e8b2a2984f62db205dab7b3e4f,},Annotations:map[string]string{io.kubernetes.container.hash: f64610c1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8bcdefdd9089db199dd6927625d23ce5553cc46a0949830ebce16e23e24bf,PodSandboxId:f04141bf9a6264c590d76ba434b0444355cf3b456d397b8081ad0ddb52d0ceca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228640701638256,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59e66dff9d6757e593577e4be5a7bcf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b712747a34d00d68d998ce34e9f775f0ddf3fc9d427853334fc3d043d9bd617d,PodSandboxId:70e422e682f35fdd17cdbdad8183e193a35ac883ebbbb1b1e21fa43e0f4505f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228640607697101,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656e6a3b53b4be584918cbaf50560652,},Annotations:map[string]string{io.kubernetes.container.hash: 995ac9bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c7bad01-7881-4780-978f-53adb57a2ddb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b6d77b0244311       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   f1c342424d4fa       storage-provisioner
	54f8a87fdb0e0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   31aefcd0f0a80       coredns-7db6d8ff4d-hsvvf
	bffb5002753da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   61080e3d2ddf2       coredns-7db6d8ff4d-v75tt
	50d1ee15d2a35       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   9 minutes ago       Running             kube-proxy                0                   580e786b47f15       kube-proxy-lnhzt
	e7d4fb81f507b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   c00063a5386b0       etcd-no-preload-087875
	5253531d0c365       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   9 minutes ago       Running             kube-controller-manager   2                   c4d2a14f93a7d       kube-controller-manager-no-preload-087875
	d2b8bcdefdd90       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   9 minutes ago       Running             kube-scheduler            2                   f04141bf9a626       kube-scheduler-no-preload-087875
	b712747a34d00       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   9 minutes ago       Running             kube-apiserver            2                   70e422e682f35       kube-apiserver-no-preload-087875
	
	
	==> coredns [54f8a87fdb0e00f5579536445325d8b2dc0cfa37844f8747f40d5357afb8cf87] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [bffb5002753da23404659493ed47336a599fda15e4fc48a8f22aa2146c588e85] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-087875
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-087875
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=no-preload-087875
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T21_44_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:44:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-087875
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:53:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:49:33 +0000   Wed, 12 Jun 2024 21:44:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:49:33 +0000   Wed, 12 Jun 2024 21:44:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:49:33 +0000   Wed, 12 Jun 2024 21:44:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:49:33 +0000   Wed, 12 Jun 2024 21:44:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.63
	  Hostname:    no-preload-087875
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 532c93e5ec184a2db3681bc0b10a099e
	  System UUID:                532c93e5-ec18-4a2d-b368-1bc0b10a099e
	  Boot ID:                    0a715db4-7372-4169-a63b-2b81aa42ebc2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-hsvvf                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-7db6d8ff4d-v75tt                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-no-preload-087875                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-no-preload-087875             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-no-preload-087875    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-lnhzt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-scheduler-no-preload-087875             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-569cc877fc-mdmgw              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  Starting                 9m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node no-preload-087875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node no-preload-087875 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node no-preload-087875 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node no-preload-087875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node no-preload-087875 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node no-preload-087875 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m5s                   node-controller  Node no-preload-087875 event: Registered Node no-preload-087875 in Controller
	
	
	==> dmesg <==
	[  +0.060200] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045199] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.966412] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.482080] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.623128] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.603726] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.062277] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063219] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.211055] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.139166] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.289199] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[Jun12 21:39] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	[  +0.057900] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.934017] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +4.603380] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.461433] kauditd_printk_skb: 79 callbacks suppressed
	[Jun12 21:43] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.919758] systemd-fstab-generator[4009]: Ignoring "noauto" option for root device
	[Jun12 21:44] kauditd_printk_skb: 57 callbacks suppressed
	[  +1.968509] systemd-fstab-generator[4337]: Ignoring "noauto" option for root device
	[ +13.397948] systemd-fstab-generator[4528]: Ignoring "noauto" option for root device
	[  +0.095180] kauditd_printk_skb: 14 callbacks suppressed
	[Jun12 21:45] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [e7d4fb81f507b1127559b8713eadff985fc51dfd8b7106a3a0c8ea9f28b027fc] <==
	{"level":"info","ts":"2024-06-12T21:44:01.021861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a33ce7b54d42dc99 switched to configuration voters=(11762531092656217241)"}
	{"level":"info","ts":"2024-06-12T21:44:01.022094Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cf3413fd070cd1a3","local-member-id":"a33ce7b54d42dc99","added-peer-id":"a33ce7b54d42dc99","added-peer-peer-urls":["https://192.168.72.63:2380"]}
	{"level":"info","ts":"2024-06-12T21:44:01.031004Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-12T21:44:01.031455Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a33ce7b54d42dc99","initial-advertise-peer-urls":["https://192.168.72.63:2380"],"listen-peer-urls":["https://192.168.72.63:2380"],"advertise-client-urls":["https://192.168.72.63:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.63:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-12T21:44:01.031609Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-12T21:44:01.03196Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.63:2380"}
	{"level":"info","ts":"2024-06-12T21:44:01.032005Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.63:2380"}
	{"level":"info","ts":"2024-06-12T21:44:01.096139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a33ce7b54d42dc99 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-12T21:44:01.096248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a33ce7b54d42dc99 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-12T21:44:01.096313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a33ce7b54d42dc99 received MsgPreVoteResp from a33ce7b54d42dc99 at term 1"}
	{"level":"info","ts":"2024-06-12T21:44:01.096425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a33ce7b54d42dc99 became candidate at term 2"}
	{"level":"info","ts":"2024-06-12T21:44:01.096457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a33ce7b54d42dc99 received MsgVoteResp from a33ce7b54d42dc99 at term 2"}
	{"level":"info","ts":"2024-06-12T21:44:01.096488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a33ce7b54d42dc99 became leader at term 2"}
	{"level":"info","ts":"2024-06-12T21:44:01.096568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a33ce7b54d42dc99 elected leader a33ce7b54d42dc99 at term 2"}
	{"level":"info","ts":"2024-06-12T21:44:01.100857Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:44:01.101254Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a33ce7b54d42dc99","local-member-attributes":"{Name:no-preload-087875 ClientURLs:[https://192.168.72.63:2379]}","request-path":"/0/members/a33ce7b54d42dc99/attributes","cluster-id":"cf3413fd070cd1a3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-12T21:44:01.101425Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:44:01.103705Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cf3413fd070cd1a3","local-member-id":"a33ce7b54d42dc99","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:44:01.104207Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:44:01.104255Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:44:01.105832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-12T21:44:01.10373Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:44:01.111696Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-12T21:44:01.115595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-12T21:44:01.11711Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.63:2379"}
	
	
	==> kernel <==
	 21:53:25 up 14 min,  0 users,  load average: 0.56, 0.32, 0.20
	Linux no-preload-087875 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b712747a34d00d68d998ce34e9f775f0ddf3fc9d427853334fc3d043d9bd617d] <==
	I0612 21:47:21.823310       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:49:03.458964       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:49:03.459483       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0612 21:49:04.460752       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:49:04.460879       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:49:04.460912       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:49:04.460982       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:49:04.461089       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:49:04.462356       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:50:04.461623       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:50:04.461953       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:50:04.461989       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:50:04.462752       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:50:04.462843       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:50:04.463986       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:52:04.462890       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:52:04.463002       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:52:04.463017       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:52:04.465308       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:52:04.465487       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:52:04.465625       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5253531d0c365ba7a37fe180563ed113f68906bd040776c09bb7aef9562ac80e] <==
	I0612 21:47:49.844704       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:48:19.404807       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:48:19.853365       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:48:49.410236       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:48:49.860219       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:49:19.415671       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:49:19.868408       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:49:49.422216       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:49:49.876857       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:50:19.427810       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:50:19.885285       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0612 21:50:25.291421       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="240.864µs"
	I0612 21:50:40.291791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="71.146µs"
	E0612 21:50:49.433130       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:50:49.892580       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:51:19.438142       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:51:19.900918       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:51:49.445109       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:51:49.910027       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:52:19.451152       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:52:19.917867       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:52:49.455865       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:52:49.926124       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:53:19.462013       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:53:19.933418       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [50d1ee15d2a35f909b263e8c592ac6c6bd5a01dc4c45e530fd0a24db98e8eb88] <==
	I0612 21:44:20.697702       1 server_linux.go:69] "Using iptables proxy"
	I0612 21:44:20.716607       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.63"]
	I0612 21:44:21.298595       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 21:44:21.298696       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 21:44:21.298784       1 server_linux.go:165] "Using iptables Proxier"
	I0612 21:44:21.338956       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 21:44:21.339309       1 server.go:872] "Version info" version="v1.30.1"
	I0612 21:44:21.339333       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:44:21.340794       1 config.go:192] "Starting service config controller"
	I0612 21:44:21.340825       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 21:44:21.340850       1 config.go:101] "Starting endpoint slice config controller"
	I0612 21:44:21.340853       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 21:44:21.344037       1 config.go:319] "Starting node config controller"
	I0612 21:44:21.344068       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 21:44:21.441013       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 21:44:21.441074       1 shared_informer.go:320] Caches are synced for service config
	I0612 21:44:21.444495       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d2b8bcdefdd9089db199dd6927625d23ce5553cc46a0949830ebce16e23e24bf] <==
	E0612 21:44:03.489350       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0612 21:44:03.489353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0612 21:44:04.304264       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0612 21:44:04.304387       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0612 21:44:04.326434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0612 21:44:04.326627       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0612 21:44:04.347248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0612 21:44:04.349972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0612 21:44:04.373781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0612 21:44:04.373882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0612 21:44:04.397288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0612 21:44:04.397448       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0612 21:44:04.468677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0612 21:44:04.468803       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0612 21:44:04.492945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0612 21:44:04.492974       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0612 21:44:04.585489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0612 21:44:04.585594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0612 21:44:04.680183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0612 21:44:04.680340       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0612 21:44:04.692101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0612 21:44:04.692228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0612 21:44:04.794497       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 21:44:04.794681       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 21:44:06.973814       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 21:51:06 no-preload-087875 kubelet[4344]: E0612 21:51:06.294888    4344 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:51:06 no-preload-087875 kubelet[4344]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:51:06 no-preload-087875 kubelet[4344]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:51:06 no-preload-087875 kubelet[4344]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:51:06 no-preload-087875 kubelet[4344]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:51:16 no-preload-087875 kubelet[4344]: E0612 21:51:16.272500    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:51:28 no-preload-087875 kubelet[4344]: E0612 21:51:28.274094    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:51:43 no-preload-087875 kubelet[4344]: E0612 21:51:43.273321    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:51:56 no-preload-087875 kubelet[4344]: E0612 21:51:56.276218    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:52:06 no-preload-087875 kubelet[4344]: E0612 21:52:06.296212    4344 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:52:06 no-preload-087875 kubelet[4344]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:52:06 no-preload-087875 kubelet[4344]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:52:06 no-preload-087875 kubelet[4344]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:52:06 no-preload-087875 kubelet[4344]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:52:07 no-preload-087875 kubelet[4344]: E0612 21:52:07.273600    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:52:22 no-preload-087875 kubelet[4344]: E0612 21:52:22.273697    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:52:35 no-preload-087875 kubelet[4344]: E0612 21:52:35.272085    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:52:47 no-preload-087875 kubelet[4344]: E0612 21:52:47.272420    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:53:02 no-preload-087875 kubelet[4344]: E0612 21:53:02.273986    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:53:06 no-preload-087875 kubelet[4344]: E0612 21:53:06.293396    4344 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:53:06 no-preload-087875 kubelet[4344]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:53:06 no-preload-087875 kubelet[4344]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:53:06 no-preload-087875 kubelet[4344]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:53:06 no-preload-087875 kubelet[4344]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:53:15 no-preload-087875 kubelet[4344]: E0612 21:53:15.272757    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	
	
	==> storage-provisioner [b6d77b024431184651a9e21a458220d2924f4a46103d49a982b82d76487f2ff9] <==
	I0612 21:44:21.665248       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0612 21:44:21.706069       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0612 21:44:21.706156       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0612 21:44:21.745263       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0612 21:44:21.745444       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-087875_6913d51f-6e50-41ee-ab1b-5c13c878778d!
	I0612 21:44:21.753575       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8d24827-03ed-4e6c-852e-2afbc0f4308a", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-087875_6913d51f-6e50-41ee-ab1b-5c13c878778d became leader
	I0612 21:44:21.845978       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-087875_6913d51f-6e50-41ee-ab1b-5c13c878778d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-087875 -n no-preload-087875
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-087875 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-mdmgw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-087875 describe pod metrics-server-569cc877fc-mdmgw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-087875 describe pod metrics-server-569cc877fc-mdmgw: exit status 1 (63.868126ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-mdmgw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-087875 describe pod metrics-server-569cc877fc-mdmgw: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.28s)
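Note: the post-mortem above reports a single non-Running pod, metrics-server-569cc877fc-mdmgw, which the kubelet log shows stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, so it never reports Ready. The final describe step returns NotFound most likely because the helper queries the default namespace while the pod lives in kube-system. A hand-run sketch of the same post-mortem with the namespace spelled out (the profile context and pod name are taken from this run and will differ on other runs):

	kubectl --context no-preload-087875 get po -A --field-selector=status.phase!=Running
	kubectl --context no-preload-087875 -n kube-system describe pod metrics-server-569cc877fc-mdmgw
	kubectl --context no-preload-087875 -n kube-system get events --field-selector involvedObject.name=metrics-server-569cc877fc-mdmgw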

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
E0612 21:46:48.613879   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
E0612 21:46:57.388084   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
E0612 21:47:06.289107   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
(last message repeated 22 more times)
E0612 21:47:29.498411   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
(last message repeated 19 more times)
E0612 21:47:49.562959   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
(last message repeated 9 more times)
E0612 21:47:59.754098   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
(last message repeated 14 more times)
E0612 21:48:14.295610   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:48:14.422157   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
(last message repeated 14 more times)
E0612 21:48:29.333864   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
(last message repeated 22 more times)
E0612 21:48:52.543237   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
(last message repeated 44 more times)
E0612 21:49:37.340181   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:49:37.468408   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
(last message repeated 18 more times)
E0612 21:49:56.704697   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
[previous warning repeated 7 more times]
E0612 21:50:04.134631   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
[previous warning repeated 29 more times]
E0612 21:50:34.342916   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
[previous warning repeated 73 more times]
E0612 21:51:48.612939   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
[previous warning repeated 17 more times]
E0612 21:52:06.289114   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
[previous warning repeated 22 more times]
E0612 21:52:29.498360   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
[last message repeated 44 more times]
E0612 21:53:14.295527   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:53:14.422074   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
[last message repeated 96 more times]
E0612 21:54:51.663072   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
[last message repeated 4 more times]
E0612 21:54:56.704485   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
[last message repeated 7 more times]
E0612 21:55:04.133654   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
E0612 21:55:34.342985   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-983302 -n old-k8s-version-983302
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-983302 -n old-k8s-version-983302: exit status 2 (230.945388ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-983302" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302: exit status 2 (220.405006ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-983302 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-983302 logs -n 25: (1.612678976s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| delete  | -p bridge-701638                                       | bridge-701638                | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-576552 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | disable-driver-mounts-576552                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-087875             | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-376087  | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-591460            | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-983302        | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-087875                  | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-376087       | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:42 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-591460                 | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-983302             | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 21:33:52
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 21:33:52.855557   80762 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:33:52.855829   80762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:33:52.855839   80762 out.go:304] Setting ErrFile to fd 2...
	I0612 21:33:52.855845   80762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:33:52.856037   80762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:33:52.856582   80762 out.go:298] Setting JSON to false
	I0612 21:33:52.857472   80762 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8178,"bootTime":1718219855,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:33:52.857527   80762 start.go:139] virtualization: kvm guest
	I0612 21:33:52.859369   80762 out.go:177] * [old-k8s-version-983302] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:33:52.860886   80762 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:33:52.860907   80762 notify.go:220] Checking for updates...
	I0612 21:33:52.862185   80762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:33:52.863642   80762 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:33:52.865031   80762 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:33:52.866306   80762 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:33:52.867535   80762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:33:52.869148   80762 config.go:182] Loaded profile config "old-k8s-version-983302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:33:52.869530   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:33:52.869597   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:33:52.884278   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41163
	I0612 21:33:52.884743   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:33:52.885211   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:33:52.885234   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:33:52.885575   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:33:52.885768   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:33:52.887577   80762 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0612 21:33:52.888972   80762 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:33:52.889265   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:33:52.889296   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:33:52.903649   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44493
	I0612 21:33:52.904087   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:33:52.904500   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:33:52.904518   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:33:52.904831   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:33:52.904988   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:33:52.939030   80762 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 21:33:52.940484   80762 start.go:297] selected driver: kvm2
	I0612 21:33:52.940497   80762 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:33:52.940622   80762 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:33:52.941314   80762 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:33:52.941389   80762 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:33:52.956273   80762 install.go:137] /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:33:52.956646   80762 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:33:52.956674   80762 cni.go:84] Creating CNI manager for ""
	I0612 21:33:52.956682   80762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:33:52.956715   80762 start.go:340] cluster config:
	{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:33:52.956828   80762 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:33:52.958634   80762 out.go:177] * Starting "old-k8s-version-983302" primary control-plane node in "old-k8s-version-983302" cluster
	I0612 21:33:52.959924   80762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:33:52.959963   80762 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0612 21:33:52.959970   80762 cache.go:56] Caching tarball of preloaded images
	I0612 21:33:52.960065   80762 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:33:52.960079   80762 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0612 21:33:52.960190   80762 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:33:52.960397   80762 start.go:360] acquireMachinesLock for old-k8s-version-983302: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:33:57.423439   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:00.495475   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:06.575478   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:09.647560   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:15.727510   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:18.799491   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:24.879423   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:27.951495   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:34.031457   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:37.103569   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:43.183470   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:46.255491   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:52.335452   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:55.407544   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:01.487489   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:04.559546   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:10.639492   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:13.711372   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:19.791460   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:22.863455   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:28.943506   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:32.015443   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:38.095436   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:41.167526   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:47.247485   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:50.319435   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:56.399471   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:59.471485   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:05.551493   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:08.623467   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:14.703401   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:17.775479   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:23.855516   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:26.927418   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:33.007439   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:36.079449   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:42.159480   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:45.231482   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:51.311424   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:54.383524   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:00.463466   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:03.535465   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:09.615457   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:12.687462   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:18.767463   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:21.839431   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:24.843967   80243 start.go:364] duration metric: took 4m34.377488728s to acquireMachinesLock for "default-k8s-diff-port-376087"
	I0612 21:37:24.844034   80243 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:37:24.844046   80243 fix.go:54] fixHost starting: 
	I0612 21:37:24.844649   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:37:24.844689   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:37:24.859743   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I0612 21:37:24.860227   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:37:24.860659   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:37:24.860680   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:37:24.861055   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:37:24.861352   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:24.861550   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:37:24.863507   80243 fix.go:112] recreateIfNeeded on default-k8s-diff-port-376087: state=Stopped err=<nil>
	I0612 21:37:24.863538   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	W0612 21:37:24.863708   80243 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:37:24.865564   80243 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-376087" ...
	I0612 21:37:24.866899   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Start
	I0612 21:37:24.867064   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring networks are active...
	I0612 21:37:24.867951   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring network default is active
	I0612 21:37:24.868390   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring network mk-default-k8s-diff-port-376087 is active
	I0612 21:37:24.868746   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Getting domain xml...
	I0612 21:37:24.869408   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Creating domain...
	I0612 21:37:24.841481   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:37:24.841529   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:37:24.841912   80157 buildroot.go:166] provisioning hostname "no-preload-087875"
	I0612 21:37:24.841938   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:37:24.842149   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:37:24.843818   80157 machine.go:97] duration metric: took 4m37.413209096s to provisionDockerMachine
	I0612 21:37:24.843853   80157 fix.go:56] duration metric: took 4m37.434262933s for fixHost
	I0612 21:37:24.843860   80157 start.go:83] releasing machines lock for "no-preload-087875", held for 4m37.434303466s
	W0612 21:37:24.843897   80157 start.go:713] error starting host: provision: host is not running
	W0612 21:37:24.843971   80157 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0612 21:37:24.843980   80157 start.go:728] Will try again in 5 seconds ...
	I0612 21:37:26.077364   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting to get IP...
	I0612 21:37:26.078173   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.078646   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.078686   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.078611   81491 retry.go:31] will retry after 224.429366ms: waiting for machine to come up
	I0612 21:37:26.305227   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.305668   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.305699   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.305627   81491 retry.go:31] will retry after 298.325251ms: waiting for machine to come up
	I0612 21:37:26.605155   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.605587   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.605622   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.605558   81491 retry.go:31] will retry after 327.789765ms: waiting for machine to come up
	I0612 21:37:26.935066   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.935536   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.935567   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.935477   81491 retry.go:31] will retry after 381.56012ms: waiting for machine to come up
	I0612 21:37:27.319036   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.319485   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.319516   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:27.319429   81491 retry.go:31] will retry after 474.663822ms: waiting for machine to come up
	I0612 21:37:27.796149   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.796596   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.796635   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:27.796564   81491 retry.go:31] will retry after 943.868595ms: waiting for machine to come up
	I0612 21:37:28.741715   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:28.742226   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:28.742259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:28.742180   81491 retry.go:31] will retry after 1.014472282s: waiting for machine to come up
	I0612 21:37:29.758384   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:29.758928   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:29.758947   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:29.758867   81491 retry.go:31] will retry after 971.872729ms: waiting for machine to come up
	I0612 21:37:29.845647   80157 start.go:360] acquireMachinesLock for no-preload-087875: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:37:30.732362   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:30.732794   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:30.732827   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:30.732742   81491 retry.go:31] will retry after 1.352202491s: waiting for machine to come up
	I0612 21:37:32.087272   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:32.087702   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:32.087726   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:32.087663   81491 retry.go:31] will retry after 2.276552983s: waiting for machine to come up
	I0612 21:37:34.367159   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:34.367579   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:34.367613   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:34.367520   81491 retry.go:31] will retry after 1.785262755s: waiting for machine to come up
	I0612 21:37:36.154927   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:36.155388   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:36.155412   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:36.155357   81491 retry.go:31] will retry after 3.309693081s: waiting for machine to come up
	I0612 21:37:39.468800   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:39.469443   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:39.469469   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:39.469393   81491 retry.go:31] will retry after 4.284995408s: waiting for machine to come up
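The repeated "will retry after ...: waiting for machine to come up" lines above are minikube polling libvirt until the restarted domain picks up a DHCP lease, with a growing, jittered delay between attempts. Below is a minimal Go sketch of that kind of backoff loop; the package, function, and field names are assumptions for illustration, not minikube's actual API.

    // A minimal sketch (assumed names, not minikube's real code) of the backoff
    // loop behind the "will retry after ...: waiting for machine to come up" lines.
    package retrysketch

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("no IP yet")

    // lookupIP stands in for asking libvirt/DHCP for the domain's current address.
    func lookupIP(domain string) (string, error) { return "", errNoIP }

    // waitForIP polls with a randomized, roughly doubling delay until the domain
    // reports an IP address or the overall deadline is exceeded.
    func waitForIP(domain string, deadline time.Duration) (string, error) {
        start := time.Now()
        delay := 200 * time.Millisecond
        for time.Since(start) < deadline {
            if ip, err := lookupIP(domain); err == nil {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            if delay < 5*time.Second {
                delay *= 2 // back off, but cap the growth
            }
        }
        return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
    }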
	I0612 21:37:45.096430   80404 start.go:364] duration metric: took 4m40.295909999s to acquireMachinesLock for "embed-certs-591460"
	I0612 21:37:45.096485   80404 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:37:45.096490   80404 fix.go:54] fixHost starting: 
	I0612 21:37:45.096932   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:37:45.096972   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:37:45.113819   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39005
	I0612 21:37:45.114290   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:37:45.114823   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:37:45.114843   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:37:45.115208   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:37:45.115415   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:37:45.115578   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:37:45.117131   80404 fix.go:112] recreateIfNeeded on embed-certs-591460: state=Stopped err=<nil>
	I0612 21:37:45.117156   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	W0612 21:37:45.117324   80404 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:37:45.119535   80404 out.go:177] * Restarting existing kvm2 VM for "embed-certs-591460" ...
	I0612 21:37:43.759195   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.759548   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Found IP for machine: 192.168.61.80
	I0612 21:37:43.759575   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has current primary IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.759583   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Reserving static IP address...
	I0612 21:37:43.760031   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Reserved static IP address: 192.168.61.80
	I0612 21:37:43.760063   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-376087", mac: "52:54:00:01:75:58", ip: "192.168.61.80"} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.760075   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for SSH to be available...
	I0612 21:37:43.760120   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | skip adding static IP to network mk-default-k8s-diff-port-376087 - found existing host DHCP lease matching {name: "default-k8s-diff-port-376087", mac: "52:54:00:01:75:58", ip: "192.168.61.80"}
	I0612 21:37:43.760134   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Getting to WaitForSSH function...
	I0612 21:37:43.762259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.762597   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.762626   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.762741   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Using SSH client type: external
	I0612 21:37:43.762771   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa (-rw-------)
	I0612 21:37:43.762804   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:37:43.762842   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | About to run SSH command:
	I0612 21:37:43.762860   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | exit 0
	I0612 21:37:43.891446   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | SSH cmd err, output: <nil>: 
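Because the log reports "Using SSH client type: external", the wait-for-SSH probe above ("exit 0") is simply the system ssh binary invoked with the option list printed a few lines earlier. A rough Go sketch of that invocation follows; the helper name is an assumption, and only the flags that appear in the log are used.

    // Sketch (assumed helper name) of what "Using SSH client type: external"
    // amounts to: shelling out to /usr/bin/ssh with the machine's private key.
    package sshsketch

    import (
        "fmt"
        "os/exec"
    )

    // runExternalSSH runs a single command on the guest, mirroring the argument
    // list printed in the log above.
    func runExternalSSH(ip, keyPath, command string) (string, error) {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            command,
        }
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        if err != nil {
            return string(out), fmt.Errorf("ssh %q failed: %w", command, err)
        }
        return string(out), nil
    }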
	I0612 21:37:43.891831   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetConfigRaw
	I0612 21:37:43.892485   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:43.895220   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.895625   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.895656   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.895928   80243 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/config.json ...
	I0612 21:37:43.896140   80243 machine.go:94] provisionDockerMachine start ...
	I0612 21:37:43.896161   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:43.896388   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:43.898898   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.899317   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.899346   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.899539   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:43.899727   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:43.899868   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:43.900019   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:43.900171   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:43.900360   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:43.900371   80243 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:37:44.016295   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:37:44.016327   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.016577   80243 buildroot.go:166] provisioning hostname "default-k8s-diff-port-376087"
	I0612 21:37:44.016602   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.016804   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.019396   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.019732   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.019763   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.019881   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.020084   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.020214   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.020418   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.020612   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.020803   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.020820   80243 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-376087 && echo "default-k8s-diff-port-376087" | sudo tee /etc/hostname
	I0612 21:37:44.146019   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376087
	
	I0612 21:37:44.146049   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.148758   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.149204   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.149238   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.149356   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.149538   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.149731   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.149873   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.150013   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.150187   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.150204   80243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-376087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-376087/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-376087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:37:44.272821   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
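The hostname provisioning step above runs two shell snippets over SSH: one sets the transient and persistent hostname, the other keeps /etc/hosts mapping 127.0.1.1 to the new name without adding duplicate entries. A small Go sketch that assembles those same commands (package and function names assumed) could look like:

    // Sketch (assumed names) of the two hostname-provisioning commands shown above.
    package hostsketch

    import "fmt"

    // hostnameCommands returns the shell snippets minikube-style provisioning
    // would run over SSH for a given machine name.
    func hostnameCommands(name string) []string {
        setHostname := fmt.Sprintf(`sudo hostname %s && echo "%s" | sudo tee /etc/hostname`, name, name)
        fixHosts := fmt.Sprintf(`if ! grep -xq '.*\s%s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts
      else
        echo '127.0.1.1 %s' | sudo tee -a /etc/hosts
      fi
    fi`, name, name, name)
        return []string{setHostname, fixHosts}
    }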
	I0612 21:37:44.272852   80243 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:37:44.272887   80243 buildroot.go:174] setting up certificates
	I0612 21:37:44.272895   80243 provision.go:84] configureAuth start
	I0612 21:37:44.272903   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.273185   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:44.275991   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.276337   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.276366   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.276591   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.279011   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.279370   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.279396   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.279521   80243 provision.go:143] copyHostCerts
	I0612 21:37:44.279576   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:37:44.279585   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:37:44.279649   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:37:44.279740   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:37:44.279748   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:37:44.279770   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:37:44.279828   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:37:44.279835   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:37:44.279855   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:37:44.279914   80243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-376087 san=[127.0.0.1 192.168.61.80 default-k8s-diff-port-376087 localhost minikube]
	I0612 21:37:44.410909   80243 provision.go:177] copyRemoteCerts
	I0612 21:37:44.410974   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:37:44.410999   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.413740   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.414140   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.414173   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.414406   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.414597   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.414759   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.414904   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:44.501641   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:37:44.526082   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0612 21:37:44.549455   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:37:44.572447   80243 provision.go:87] duration metric: took 299.539656ms to configureAuth
	I0612 21:37:44.572473   80243 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:37:44.572632   80243 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:37:44.572731   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.575518   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.575913   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.575948   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.576170   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.576383   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.576553   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.576754   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.576913   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.577134   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.577155   80243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:37:44.851891   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:37:44.851922   80243 machine.go:97] duration metric: took 955.766062ms to provisionDockerMachine
	I0612 21:37:44.851936   80243 start.go:293] postStartSetup for "default-k8s-diff-port-376087" (driver="kvm2")
	I0612 21:37:44.851951   80243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:37:44.851970   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:44.852318   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:37:44.852352   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.855231   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.855556   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.855595   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.855727   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.855935   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.856127   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.856260   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:44.941821   80243 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:37:44.946013   80243 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:37:44.946052   80243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:37:44.946120   80243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:37:44.946200   80243 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:37:44.946281   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:37:44.955467   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:37:44.979379   80243 start.go:296] duration metric: took 127.428385ms for postStartSetup
	I0612 21:37:44.979421   80243 fix.go:56] duration metric: took 20.135375416s for fixHost
	I0612 21:37:44.979445   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.981891   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.982259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.982287   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.982520   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.982713   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.982920   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.983040   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.983220   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.983450   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.983467   80243 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:37:45.096266   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228265.072559389
	
	I0612 21:37:45.096288   80243 fix.go:216] guest clock: 1718228265.072559389
	I0612 21:37:45.096295   80243 fix.go:229] Guest: 2024-06-12 21:37:45.072559389 +0000 UTC Remote: 2024-06-12 21:37:44.979426071 +0000 UTC m=+294.653210040 (delta=93.133318ms)
	I0612 21:37:45.096313   80243 fix.go:200] guest clock delta is within tolerance: 93.133318ms
	I0612 21:37:45.096318   80243 start.go:83] releasing machines lock for "default-k8s-diff-port-376087", held for 20.252307995s
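The fix.go lines above compare the guest's clock (read with "date" over SSH) against the host's and only resynchronize when the difference exceeds a tolerance; here the ~93ms delta is accepted. A simplified sketch of that comparison is below; the tolerance value is an assumption for the example, not necessarily what minikube uses.

    // Illustrative sketch of the guest-clock tolerance check logged above.
    package clocksketch

    import (
        "fmt"
        "time"
    )

    // checkGuestClock returns true when the guest's clock is close enough to the
    // host's that no adjustment is needed.
    func checkGuestClock(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta is %v (tolerance %v)\n", delta, tolerance)
        return delta <= tolerance
    }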
	I0612 21:37:45.096346   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.096683   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:45.099332   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.099761   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.099805   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.099902   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100560   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100767   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100841   80243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:37:45.100880   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:45.100981   80243 ssh_runner.go:195] Run: cat /version.json
	I0612 21:37:45.101007   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:45.103590   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.103774   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104052   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.104084   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104186   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.104202   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:45.104210   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104417   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:45.104430   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:45.104650   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:45.104651   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:45.104837   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:45.104852   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:45.104993   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:45.208199   80243 ssh_runner.go:195] Run: systemctl --version
	I0612 21:37:45.214375   80243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:37:45.370991   80243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:37:45.378676   80243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:37:45.378744   80243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:37:45.400622   80243 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:37:45.400642   80243 start.go:494] detecting cgroup driver to use...
	I0612 21:37:45.400709   80243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:37:45.416775   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:37:45.430261   80243 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:37:45.430314   80243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:37:45.445482   80243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:37:45.461471   80243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:37:45.578411   80243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:37:45.750493   80243 docker.go:233] disabling docker service ...
	I0612 21:37:45.750556   80243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:37:45.769072   80243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:37:45.784755   80243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:37:45.907970   80243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:37:46.031847   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:37:46.046473   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:37:46.067764   80243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:37:46.067813   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.080604   80243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:37:46.080660   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.093611   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.104443   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.117070   80243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:37:46.128759   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.139977   80243 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.157893   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.168896   80243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:37:46.179765   80243 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:37:46.179816   80243 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:37:46.194059   80243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:37:46.205474   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:37:46.322562   80243 ssh_runner.go:195] Run: sudo systemctl restart crio
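The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with a series of sed commands (pause image, cgroupfs cgroup manager, conmon cgroup), then reloads systemd and restarts crio. A consolidated Go sketch of that sequence, behind an assumed runner interface standing in for the ssh_runner seen in the log, is:

    // Illustrative consolidation (assumed runner interface) of the CRI-O config
    // edits and restart shown in the log above.
    package criosketch

    // runner abstracts "run this shell command on the guest".
    type runner interface {
        Run(cmd string) error
    }

    // configureCRIO applies the same 02-crio.conf edits as the log, then
    // reloads systemd and restarts crio.
    func configureCRIO(r runner, pauseImage string) error {
        cmds := []string{
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "` + pauseImage + `"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo systemctl daemon-reload`,
            `sudo systemctl restart crio`,
        }
        for _, c := range cmds {
            if err := r.Run(c); err != nil {
                return err
            }
        }
        return nil
    }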
	I0612 21:37:46.479073   80243 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:37:46.479149   80243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:37:46.484557   80243 start.go:562] Will wait 60s for crictl version
	I0612 21:37:46.484609   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:37:46.488403   80243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:37:46.529210   80243 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:37:46.529301   80243 ssh_runner.go:195] Run: crio --version
	I0612 21:37:46.561476   80243 ssh_runner.go:195] Run: crio --version
	I0612 21:37:46.594477   80243 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:37:45.120900   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Start
	I0612 21:37:45.121084   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring networks are active...
	I0612 21:37:45.121776   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring network default is active
	I0612 21:37:45.122108   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring network mk-embed-certs-591460 is active
	I0612 21:37:45.122554   80404 main.go:141] libmachine: (embed-certs-591460) Getting domain xml...
	I0612 21:37:45.123260   80404 main.go:141] libmachine: (embed-certs-591460) Creating domain...
	I0612 21:37:46.357867   80404 main.go:141] libmachine: (embed-certs-591460) Waiting to get IP...
	I0612 21:37:46.358704   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.359164   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.359265   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.359144   81627 retry.go:31] will retry after 278.948395ms: waiting for machine to come up
	I0612 21:37:46.639971   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.640491   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.640523   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.640433   81627 retry.go:31] will retry after 342.550517ms: waiting for machine to come up
	I0612 21:37:46.985065   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.985590   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.985618   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.985548   81627 retry.go:31] will retry after 297.683214ms: waiting for machine to come up
	I0612 21:37:47.285192   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:47.285650   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:47.285688   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:47.285615   81627 retry.go:31] will retry after 415.994572ms: waiting for machine to come up
	I0612 21:37:47.702894   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:47.703398   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:47.703424   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:47.703353   81627 retry.go:31] will retry after 672.441633ms: waiting for machine to come up
	I0612 21:37:48.377227   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:48.377772   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:48.377802   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:48.377735   81627 retry.go:31] will retry after 790.165478ms: waiting for machine to come up
	I0612 21:37:49.169651   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:49.170194   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:49.170224   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:49.170134   81627 retry.go:31] will retry after 953.609739ms: waiting for machine to come up
	I0612 21:37:46.595772   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:46.599221   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:46.599682   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:46.599712   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:46.599919   80243 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0612 21:37:46.604573   80243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:37:46.617274   80243 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-376087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:37:46.617388   80243 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:37:46.617443   80243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:37:46.663227   80243 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:37:46.663306   80243 ssh_runner.go:195] Run: which lz4
	I0612 21:37:46.667878   80243 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0612 21:37:46.672384   80243 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:37:46.672416   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 21:37:48.195844   80243 crio.go:462] duration metric: took 1.527996646s to copy over tarball
	I0612 21:37:48.195908   80243 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:37:50.125800   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:50.126305   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:50.126337   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:50.126260   81627 retry.go:31] will retry after 938.251336ms: waiting for machine to come up
	I0612 21:37:51.065851   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:51.066225   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:51.066247   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:51.066194   81627 retry.go:31] will retry after 1.635454683s: waiting for machine to come up
	I0612 21:37:52.704193   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:52.704663   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:52.704687   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:52.704633   81627 retry.go:31] will retry after 1.56455027s: waiting for machine to come up
	I0612 21:37:54.271391   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:54.271873   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:54.271919   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:54.271826   81627 retry.go:31] will retry after 2.052574222s: waiting for machine to come up
	I0612 21:37:50.464553   80243 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.268615304s)
	I0612 21:37:50.464601   80243 crio.go:469] duration metric: took 2.268715227s to extract the tarball
	I0612 21:37:50.464612   80243 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:37:50.502406   80243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:37:50.550796   80243 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:37:50.550821   80243 cache_images.go:84] Images are preloaded, skipping loading
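The preload path above first asks crictl whether the expected images are already in CRI-O's store; since they are not, it copies the lz4 tarball to the guest, unpacks it into /var, and removes it, after which the second crictl check reports all images as preloaded. A rough sketch of that flow follows; the interfaces and names are assumptions, not minikube's actual types.

    // Rough sketch (assumed interfaces) of the preload check-and-extract flow
    // shown in the log above.
    package preloadsketch

    import "strings"

    // runner abstracts running a command on, and copying a file to, the guest.
    type runner interface {
        Run(cmd string) (string, error)
        Copy(localPath, remotePath string) error
    }

    // ensurePreload skips loading when a marker image (e.g.
    // "registry.k8s.io/kube-apiserver:v1.30.1") is already present, otherwise
    // ships and unpacks the preloaded tarball.
    func ensurePreload(r runner, localTarball, imageToCheck string) error {
        out, err := r.Run("sudo crictl images --output json")
        if err == nil && strings.Contains(out, imageToCheck) {
            return nil // all images are preloaded, skip loading
        }
        if err := r.Copy(localTarball, "/preloaded.tar.lz4"); err != nil {
            return err
        }
        if _, err := r.Run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
            return err
        }
        _, err = r.Run("sudo rm -f /preloaded.tar.lz4")
        return err
    }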
	I0612 21:37:50.550831   80243 kubeadm.go:928] updating node { 192.168.61.80 8444 v1.30.1 crio true true} ...
	I0612 21:37:50.550957   80243 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-376087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:37:50.551042   80243 ssh_runner.go:195] Run: crio config
	I0612 21:37:50.603232   80243 cni.go:84] Creating CNI manager for ""
	I0612 21:37:50.603256   80243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:37:50.603268   80243 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:37:50.603299   80243 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.80 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-376087 NodeName:default-k8s-diff-port-376087 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:37:50.603459   80243 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.80
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-376087"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
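
The three YAML documents above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) are written out as /var/tmp/minikube/kubeadm.yaml.new a few log lines below. As a hedged illustration only, not part of the test run, the following minimal Go sketch shows one way to sanity-check such a multi-document kubeadm config before kubeadm consumes it; the gopkg.in/yaml.v3 dependency is an assumption, and the file path is taken from this log.

// validate_kubeadm_yaml.go — illustrative only; not part of the minikube test run.
// Splits a multi-document kubeadm config and checks every document declares
// apiVersion and kind.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"gopkg.in/yaml.v3" // assumed available; not used by minikube itself here
)

func main() {
	// Path taken from the log lines below; adjust as needed.
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	for i, doc := range strings.Split(string(raw), "\n---\n") {
		var obj struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
			log.Fatalf("document %d: %v", i, err)
		}
		if obj.APIVersion == "" || obj.Kind == "" {
			log.Fatalf("document %d is missing apiVersion or kind", i)
		}
		fmt.Printf("document %d: %s/%s\n", i, obj.APIVersion, obj.Kind)
	}
}
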
	
	I0612 21:37:50.603524   80243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:37:50.614003   80243 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:37:50.614082   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:37:50.623416   80243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0612 21:37:50.640203   80243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:37:50.656668   80243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0612 21:37:50.674601   80243 ssh_runner.go:195] Run: grep 192.168.61.80	control-plane.minikube.internal$ /etc/hosts
	I0612 21:37:50.678858   80243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
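
The bash one-liner above keeps /etc/hosts idempotent: it drops any stale line ending in the control-plane.minikube.internal name, appends the current control-plane IP, and copies the result back with sudo. Purely as a sketch of that filter-and-append step (not minikube's code; the output path /tmp/hosts.new is hypothetical, and the final privileged copy is left to sudo as in the log):

// rewrite_hosts.go — illustrative sketch of the filter-and-append step above.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// IP and hostname taken from the log; both are specific to this test run.
	const entry = "192.168.61.80\tcontrol-plane.minikube.internal"
	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// /tmp/hosts.new is a hypothetical name; installing it over /etc/hosts
	// still needs privileges (the log does this with `sudo cp`).
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
	fmt.Println("wrote /tmp/hosts.new")
}
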
	I0612 21:37:50.692389   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:37:50.822225   80243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:37:50.840703   80243 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087 for IP: 192.168.61.80
	I0612 21:37:50.840734   80243 certs.go:194] generating shared ca certs ...
	I0612 21:37:50.840758   80243 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:37:50.840936   80243 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:37:50.840986   80243 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:37:50.840999   80243 certs.go:256] generating profile certs ...
	I0612 21:37:50.841133   80243 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/client.key
	I0612 21:37:50.841200   80243 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.key.0afce446
	I0612 21:37:50.841238   80243 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.key
	I0612 21:37:50.841357   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:37:50.841398   80243 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:37:50.841409   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:37:50.841438   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:37:50.841469   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:37:50.841489   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:37:50.841529   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:37:50.842311   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:37:50.880075   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:37:50.914504   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:37:50.945724   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:37:50.975702   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0612 21:37:51.009817   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:37:51.039086   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:37:51.064146   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:37:51.088483   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:37:51.112785   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:37:51.136192   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:37:51.159239   80243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:37:51.175719   80243 ssh_runner.go:195] Run: openssl version
	I0612 21:37:51.181707   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:37:51.193498   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.198415   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.198475   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.204601   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:37:51.216354   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:37:51.231979   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.236952   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.237018   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.243461   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:37:51.258481   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:37:51.273412   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.279356   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.279420   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.285551   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
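
The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes printed by the preceding `openssl x509 -hash -noout` runs; linking <hash>.0 into /etc/ssl/certs is what lets OpenSSL-based clients find each CA by hash. A minimal Go sketch of that hash-and-link step, hedged as an illustration rather than minikube's implementation (it shells out to openssl and would need root, like the `sudo ln -fs` in the log):

// trust_cert.go — illustrative sketch of hashing a CA cert and linking it into /etc/ssl/certs.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem in this run
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Needs root; unlike `ln -fs`, os.Symlink fails if the link already exists.
	if err := os.Symlink(cert, link); err != nil {
		log.Fatalf("symlink %s: %v", link, err)
	}
	fmt.Printf("linked %s -> %s\n", link, cert)
}
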
	I0612 21:37:51.298066   80243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:37:51.302791   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:37:51.309402   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:37:51.316170   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:37:51.322785   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:37:51.329066   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:37:51.335031   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
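
Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is what would trigger regeneration. The same check can be expressed with Go's standard crypto/x509, shown here only as a hedged equivalent (the certificate path is one of those from the log):

// checkend.go — illustrative equivalent of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Path taken from the log; any of the checked certs works the same way.
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Still valid 24h (86400s) from now? Mirrors the -checkend 86400 test.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; would regenerate")
		os.Exit(1)
	}
	fmt.Println("certificate ok for at least another 24h")
}
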
	I0612 21:37:51.340945   80243 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-376087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:37:51.341082   80243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:37:51.341143   80243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:37:51.383011   80243 cri.go:89] found id: ""
	I0612 21:37:51.383134   80243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:37:51.394768   80243 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:37:51.394794   80243 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:37:51.394800   80243 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:37:51.394852   80243 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:37:51.408147   80243 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:37:51.409094   80243 kubeconfig.go:125] found "default-k8s-diff-port-376087" server: "https://192.168.61.80:8444"
	I0612 21:37:51.411221   80243 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:37:51.421897   80243 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.80
	I0612 21:37:51.421934   80243 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:37:51.421949   80243 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:37:51.422029   80243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:37:51.470321   80243 cri.go:89] found id: ""
	I0612 21:37:51.470441   80243 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:37:51.488369   80243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:37:51.498367   80243 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:37:51.498388   80243 kubeadm.go:156] found existing configuration files:
	
	I0612 21:37:51.498449   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0612 21:37:51.510212   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:37:51.510287   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:37:51.520231   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0612 21:37:51.529270   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:37:51.529339   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:37:51.538902   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0612 21:37:51.548593   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:37:51.548652   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:37:51.558533   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0612 21:37:51.567995   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:37:51.568063   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:37:51.577695   80243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:37:51.587794   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:51.718155   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.602448   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.820456   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.901167   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
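
Because existing configuration was found, the restart path replays individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config instead of running a full `kubeadm init`. As a hedged sketch of that sequence only (the phase arguments and config path are copied from the log lines above; this is not minikube's own code):

// replay_phases.go — illustrative sketch of re-running the kubeadm init phases above.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml" // path taken from the log
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubeadm %v: %v", args, err)
		}
	}
}
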
	I0612 21:37:52.977502   80243 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:37:52.977606   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.477802   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.977879   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.995753   80243 api_server.go:72] duration metric: took 1.018251882s to wait for apiserver process to appear ...
	I0612 21:37:53.995788   80243 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:37:53.995812   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:53.996308   80243 api_server.go:269] stopped: https://192.168.61.80:8444/healthz: Get "https://192.168.61.80:8444/healthz": dial tcp 192.168.61.80:8444: connect: connection refused
	I0612 21:37:54.496045   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.293362   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:37:57.293394   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:37:57.293408   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.395854   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:57.395886   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:57.496122   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.505090   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:57.505124   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:57.996334   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:58.000606   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:58.000646   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:58.496177   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:58.504422   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 200:
	ok
	I0612 21:37:58.513123   80243 api_server.go:141] control plane version: v1.30.1
	I0612 21:37:58.513150   80243 api_server.go:131] duration metric: took 4.517354722s to wait for apiserver health ...
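
The /healthz probes above show the usual restart progression: connection refused while the static pod comes up, then 403 for the anonymous probe, then 500 while post-start hooks (bootstrap-roles, bootstrap-controller, and so on) finish, and finally 200 "ok". Every non-200 answer is treated as "not ready yet" and retried. A minimal Go sketch of such a polling loop, hedged as an illustration (endpoint taken from the log; TLS verification is skipped here only to keep the sketch short, whereas the real check authenticates with the cluster's certificates):

// wait_healthz.go — illustrative polling of the apiserver /healthz endpoint above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Verification skipped for brevity only; see the hedge in the lead-in.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.80:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("apiserver healthy: %s\n", body)
				return
			}
			fmt.Printf("not ready yet (HTTP %d), retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("not reachable yet (%v), retrying\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver did not become healthy before the deadline")
}
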
	I0612 21:37:58.513158   80243 cni.go:84] Creating CNI manager for ""
	I0612 21:37:58.513163   80243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:37:58.514696   80243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:37:56.325937   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:56.326316   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:56.326343   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:56.326261   81627 retry.go:31] will retry after 3.51636746s: waiting for machine to come up
	I0612 21:37:58.516091   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:37:58.541034   80243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:37:58.585635   80243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:37:58.596829   80243 system_pods.go:59] 8 kube-system pods found
	I0612 21:37:58.596859   80243 system_pods.go:61] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:37:58.596867   80243 system_pods.go:61] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:37:58.596873   80243 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:37:58.596883   80243 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:37:58.596888   80243 system_pods.go:61] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0612 21:37:58.596893   80243 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:37:58.596899   80243 system_pods.go:61] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:37:58.596904   80243 system_pods.go:61] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:37:58.596910   80243 system_pods.go:74] duration metric: took 11.248328ms to wait for pod list to return data ...
	I0612 21:37:58.596917   80243 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:37:58.600081   80243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:37:58.600107   80243 node_conditions.go:123] node cpu capacity is 2
	I0612 21:37:58.600119   80243 node_conditions.go:105] duration metric: took 3.197181ms to run NodePressure ...
	I0612 21:37:58.600134   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:58.911963   80243 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:37:58.918455   80243 kubeadm.go:733] kubelet initialised
	I0612 21:37:58.918475   80243 kubeadm.go:734] duration metric: took 6.490654ms waiting for restarted kubelet to initialise ...
	I0612 21:37:58.918482   80243 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:37:58.924427   80243 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.930290   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.930329   80243 pod_ready.go:81] duration metric: took 5.86525ms for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.930339   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.930346   80243 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.935394   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.935416   80243 pod_ready.go:81] duration metric: took 5.061639ms for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.935426   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.935431   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.940238   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.940268   80243 pod_ready.go:81] duration metric: took 4.829842ms for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.940286   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.940295   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.989649   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.989686   80243 pod_ready.go:81] duration metric: took 49.380431ms for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.989702   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.989711   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:59.389868   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-proxy-8lrgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.389903   80243 pod_ready.go:81] duration metric: took 400.174877ms for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:59.389912   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-proxy-8lrgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.389918   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:59.790398   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.790425   80243 pod_ready.go:81] duration metric: took 400.499157ms for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:59.790435   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.790449   80243 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:00.189506   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:00.189533   80243 pod_ready.go:81] duration metric: took 399.075983ms for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:00.189551   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:00.189559   80243 pod_ready.go:38] duration metric: took 1.271068537s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
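
Each "waiting up to 4m0s for pod ... to be Ready" entry above is short-circuited because the node itself still reports Ready=False after the restart, so the per-pod waits are skipped until the node condition flips. As a hedged illustration of that node-condition check using client-go (assumed available as a dependency; kubeconfig path and node name are taken from this log, not hard-coded in minikube):

// node_ready.go — illustrative client-go check of the node Ready condition above.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log for this Jenkins run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17779-14199/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "default-k8s-diff-port-376087", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("node Ready=%s (%s)\n", cond.Status, cond.Reason)
			return
		}
	}
	fmt.Println("node has no Ready condition yet")
}
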
	I0612 21:38:00.189574   80243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:38:00.201480   80243 ops.go:34] apiserver oom_adj: -16
	I0612 21:38:00.201504   80243 kubeadm.go:591] duration metric: took 8.806697524s to restartPrimaryControlPlane
	I0612 21:38:00.201514   80243 kubeadm.go:393] duration metric: took 8.860579681s to StartCluster
	I0612 21:38:00.201536   80243 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:00.201601   80243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:38:00.203106   80243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:00.203416   80243 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:38:00.205568   80243 out.go:177] * Verifying Kubernetes components...
	I0612 21:38:00.203448   80243 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:38:00.203614   80243 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:00.207110   80243 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207120   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:00.207120   80243 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207143   80243 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207166   80243 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.207193   80243 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:38:00.207187   80243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-376087"
	I0612 21:38:00.207208   80243 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.207222   80243 addons.go:243] addon metrics-server should already be in state true
	I0612 21:38:00.207230   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.207263   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.207490   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207511   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207519   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.207544   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.207553   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207572   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.222521   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0612 21:38:00.222979   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.223496   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.223523   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.223899   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.224519   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.224555   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.227511   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
	I0612 21:38:00.227543   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33041
	I0612 21:38:00.227874   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.227930   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.228402   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.228409   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.228426   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.228471   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.228776   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.228780   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.228952   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.229291   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.229323   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.232640   80243 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.232662   80243 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:38:00.232690   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.233072   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.233103   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.240883   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38355
	I0612 21:38:00.241363   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.241839   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.241861   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.242217   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.242434   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.244544   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.244604   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0612 21:38:00.246924   80243 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:38:00.244915   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.248406   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:38:00.248430   80243 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:38:00.248451   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.248861   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.248887   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.249211   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.249431   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.251070   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.251137   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43271
	I0612 21:38:00.252729   80243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:00.251644   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.252033   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.252601   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.254033   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.254079   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.254111   80243 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:38:00.254127   80243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:38:00.254148   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.254211   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.254399   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.254515   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.254542   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.254712   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.254926   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.256878   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.256948   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.257836   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.258073   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.258105   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.258767   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.258993   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.259141   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.259283   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.272822   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42339
	I0612 21:38:00.273238   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.273710   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.273734   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.274221   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.274411   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.276056   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.276286   80243 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:38:00.276302   80243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:38:00.276323   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.279285   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.279351   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.279400   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.279516   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.279675   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.279809   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.279940   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.392656   80243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:00.411972   80243 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-376087" to be "Ready" ...
	I0612 21:38:00.502108   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:38:00.504572   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:38:00.504590   80243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:38:00.522021   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:38:00.522057   80243 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:38:00.538366   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:38:00.541981   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:38:00.541999   80243 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:38:00.561335   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:38:01.519955   80243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.017815416s)
	I0612 21:38:01.520006   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520019   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520087   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520100   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520312   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520334   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520343   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520350   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520422   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520435   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520444   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520452   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520554   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520573   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520647   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520678   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.520680   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.528807   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.528827   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.529143   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.529162   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.529166   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.556376   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.556399   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.556701   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.556750   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.556762   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.556780   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.556791   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.557157   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.557179   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.557190   80243 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-376087"
	I0612 21:38:01.559103   80243 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:37:59.844024   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:59.844481   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:59.844505   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:59.844433   81627 retry.go:31] will retry after 3.77902453s: waiting for machine to come up
	I0612 21:38:03.626861   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.627380   80404 main.go:141] libmachine: (embed-certs-591460) Found IP for machine: 192.168.39.147
	I0612 21:38:03.627399   80404 main.go:141] libmachine: (embed-certs-591460) Reserving static IP address...
	I0612 21:38:03.627416   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has current primary IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.627917   80404 main.go:141] libmachine: (embed-certs-591460) Reserved static IP address: 192.168.39.147
	I0612 21:38:03.627964   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "embed-certs-591460", mac: "52:54:00:41:f7:d9", ip: "192.168.39.147"} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.627981   80404 main.go:141] libmachine: (embed-certs-591460) Waiting for SSH to be available...
	I0612 21:38:03.628011   80404 main.go:141] libmachine: (embed-certs-591460) DBG | skip adding static IP to network mk-embed-certs-591460 - found existing host DHCP lease matching {name: "embed-certs-591460", mac: "52:54:00:41:f7:d9", ip: "192.168.39.147"}
	I0612 21:38:03.628030   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Getting to WaitForSSH function...
	I0612 21:38:03.630082   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.630480   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.630581   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.630762   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Using SSH client type: external
	I0612 21:38:03.630802   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa (-rw-------)
	I0612 21:38:03.630846   80404 main.go:141] libmachine: (embed-certs-591460) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:03.630872   80404 main.go:141] libmachine: (embed-certs-591460) DBG | About to run SSH command:
	I0612 21:38:03.630882   80404 main.go:141] libmachine: (embed-certs-591460) DBG | exit 0
	I0612 21:38:03.755304   80404 main.go:141] libmachine: (embed-certs-591460) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:03.755720   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetConfigRaw
	I0612 21:38:03.756310   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:03.758608   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.758927   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.758966   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.759153   80404 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/config.json ...
	I0612 21:38:03.759390   80404 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:03.759414   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:03.759641   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.761954   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.762215   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.762244   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.762371   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.762525   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.762689   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.762842   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.762995   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.763183   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.763206   80404 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:03.867900   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:03.867936   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:03.868185   80404 buildroot.go:166] provisioning hostname "embed-certs-591460"
	I0612 21:38:03.868210   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:03.868430   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.871347   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.871690   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.871721   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.871816   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.871977   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.872130   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.872258   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.872408   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.872588   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.872604   80404 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-591460 && echo "embed-certs-591460" | sudo tee /etc/hostname
	I0612 21:38:03.990526   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-591460
	
	I0612 21:38:03.990550   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.993057   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.993458   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.993485   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.993646   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.993830   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.993985   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.994125   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.994297   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.994499   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.994524   80404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-591460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-591460/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-591460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:04.120595   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:04.120623   80404 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:04.120640   80404 buildroot.go:174] setting up certificates
	I0612 21:38:04.120650   80404 provision.go:84] configureAuth start
	I0612 21:38:04.120658   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:04.120910   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:04.123483   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.123854   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.123879   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.124153   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.126901   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.127293   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.127318   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.127494   80404 provision.go:143] copyHostCerts
	I0612 21:38:04.127554   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:04.127566   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:04.127635   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:04.127736   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:04.127747   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:04.127785   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:04.127860   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:04.127870   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:04.127896   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:04.127960   80404 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.embed-certs-591460 san=[127.0.0.1 192.168.39.147 embed-certs-591460 localhost minikube]
	I0612 21:38:04.265296   80404 provision.go:177] copyRemoteCerts
	I0612 21:38:04.265361   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:04.265392   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.267703   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.268044   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.268090   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.268244   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.268421   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.268583   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.268780   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.349440   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:04.374868   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0612 21:38:04.398419   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:38:04.423319   80404 provision.go:87] duration metric: took 302.657777ms to configureAuth
	I0612 21:38:04.423353   80404 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:04.423514   80404 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:04.423586   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.426301   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.426612   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.426641   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.426796   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.426971   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.427186   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.427331   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.427553   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:04.427723   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:04.427739   80404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:04.689161   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:04.689199   80404 machine.go:97] duration metric: took 929.790838ms to provisionDockerMachine
	I0612 21:38:04.689212   80404 start.go:293] postStartSetup for "embed-certs-591460" (driver="kvm2")
	I0612 21:38:04.689223   80404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:04.689242   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.689569   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:04.689616   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.692484   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.692841   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.692864   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.693002   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.693191   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.693326   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.693469   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.923975   80762 start.go:364] duration metric: took 4m11.963543792s to acquireMachinesLock for "old-k8s-version-983302"
	I0612 21:38:04.924056   80762 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:38:04.924068   80762 fix.go:54] fixHost starting: 
	I0612 21:38:04.924507   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:04.924543   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:04.942022   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0612 21:38:04.942428   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:04.942891   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:38:04.942917   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:04.943345   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:04.943553   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:04.943726   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetState
	I0612 21:38:04.945403   80762 fix.go:112] recreateIfNeeded on old-k8s-version-983302: state=Stopped err=<nil>
	I0612 21:38:04.945427   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	W0612 21:38:04.945581   80762 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:38:04.947672   80762 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-983302" ...
	I0612 21:38:01.560387   80243 addons.go:510] duration metric: took 1.356939902s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0612 21:38:02.416070   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:04.416451   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:04.774287   80404 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:04.778568   80404 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:04.778596   80404 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:04.778667   80404 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:04.778740   80404 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:04.778819   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:04.788602   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:04.813969   80404 start.go:296] duration metric: took 124.741162ms for postStartSetup
	I0612 21:38:04.814020   80404 fix.go:56] duration metric: took 19.717527303s for fixHost
	I0612 21:38:04.814049   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.816907   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.817268   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.817294   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.817511   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.817728   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.817905   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.818087   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.818293   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:04.818501   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:04.818516   80404 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 21:38:04.923846   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228284.879920542
	
	I0612 21:38:04.923868   80404 fix.go:216] guest clock: 1718228284.879920542
	I0612 21:38:04.923874   80404 fix.go:229] Guest: 2024-06-12 21:38:04.879920542 +0000 UTC Remote: 2024-06-12 21:38:04.814026698 +0000 UTC m=+300.152179547 (delta=65.893844ms)
	I0612 21:38:04.923890   80404 fix.go:200] guest clock delta is within tolerance: 65.893844ms
	I0612 21:38:04.923894   80404 start.go:83] releasing machines lock for "embed-certs-591460", held for 19.827427255s
	I0612 21:38:04.923920   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.924155   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:04.926708   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.927102   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.927146   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.927281   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.927788   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.927955   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.928043   80404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:04.928099   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.928158   80404 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:04.928182   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.930931   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931237   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931377   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.931415   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931561   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.931587   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931592   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.931742   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.931790   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.931916   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.931916   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.932111   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.932127   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.932250   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:05.009184   80404 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:05.035746   80404 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:05.181527   80404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:05.189035   80404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:05.189113   80404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:05.205860   80404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:05.205886   80404 start.go:494] detecting cgroup driver to use...
	I0612 21:38:05.205957   80404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:05.223913   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:05.239598   80404 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:05.239679   80404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:05.253501   80404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:05.268094   80404 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:05.397260   80404 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:05.560454   80404 docker.go:233] disabling docker service ...
	I0612 21:38:05.560532   80404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:05.579197   80404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:05.593420   80404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:05.728145   80404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:05.860041   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:05.876025   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:05.895242   80404 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:38:05.895336   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.906575   80404 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:05.906662   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.918248   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.929178   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.942169   80404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:05.953542   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.969045   80404 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.989509   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:06.001532   80404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:06.012676   80404 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:06.012740   80404 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:06.030028   80404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:06.048168   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:06.190039   80404 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:06.349088   80404 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:06.349151   80404 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:06.355251   80404 start.go:562] Will wait 60s for crictl version
	I0612 21:38:06.355321   80404 ssh_runner.go:195] Run: which crictl
	I0612 21:38:06.359456   80404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:06.400450   80404 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:06.400525   80404 ssh_runner.go:195] Run: crio --version
	I0612 21:38:06.430078   80404 ssh_runner.go:195] Run: crio --version
	I0612 21:38:06.461616   80404 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:38:04.949078   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .Start
	I0612 21:38:04.949226   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring networks are active...
	I0612 21:38:04.949936   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network default is active
	I0612 21:38:04.950371   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network mk-old-k8s-version-983302 is active
	I0612 21:38:04.950813   80762 main.go:141] libmachine: (old-k8s-version-983302) Getting domain xml...
	I0612 21:38:04.951549   80762 main.go:141] libmachine: (old-k8s-version-983302) Creating domain...
	I0612 21:38:06.296150   80762 main.go:141] libmachine: (old-k8s-version-983302) Waiting to get IP...
	I0612 21:38:06.296978   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.297465   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.297570   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.297453   81824 retry.go:31] will retry after 256.609938ms: waiting for machine to come up
	I0612 21:38:06.556307   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.556935   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.556967   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.556884   81824 retry.go:31] will retry after 285.754887ms: waiting for machine to come up
	I0612 21:38:06.844674   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.845227   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.845255   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.845171   81824 retry.go:31] will retry after 326.266367ms: waiting for machine to come up
	I0612 21:38:07.172788   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:07.173414   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:07.173447   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:07.173353   81824 retry.go:31] will retry after 393.443927ms: waiting for machine to come up
	I0612 21:38:07.568084   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:07.568645   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:07.568673   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:07.568609   81824 retry.go:31] will retry after 726.66775ms: waiting for machine to come up
	I0612 21:38:06.462860   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:06.466111   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:06.466521   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:06.466551   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:06.466837   80404 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:06.471361   80404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:06.485595   80404 kubeadm.go:877] updating cluster {Name:embed-certs-591460 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:06.485718   80404 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:38:06.485761   80404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:06.528708   80404 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:38:06.528778   80404 ssh_runner.go:195] Run: which lz4
	I0612 21:38:06.533340   80404 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 21:38:06.538076   80404 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:38:06.538115   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 21:38:08.078495   80404 crio.go:462] duration metric: took 1.545201872s to copy over tarball
	I0612 21:38:08.078573   80404 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:38:06.917632   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:07.916734   80243 node_ready.go:49] node "default-k8s-diff-port-376087" has status "Ready":"True"
	I0612 21:38:07.916763   80243 node_ready.go:38] duration metric: took 7.504763576s for node "default-k8s-diff-port-376087" to be "Ready" ...
	I0612 21:38:07.916775   80243 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:38:07.924249   80243 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.931751   80243 pod_ready.go:92] pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:07.931773   80243 pod_ready.go:81] duration metric: took 7.493608ms for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.931782   80243 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.937804   80243 pod_ready.go:92] pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:07.937880   80243 pod_ready.go:81] duration metric: took 6.090191ms for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.937904   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:09.944927   80243 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:08.296811   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:08.297295   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:08.297319   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:08.297250   81824 retry.go:31] will retry after 658.540746ms: waiting for machine to come up
	I0612 21:38:08.957164   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:08.957611   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:08.957635   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:08.957576   81824 retry.go:31] will retry after 921.725713ms: waiting for machine to come up
	I0612 21:38:09.880881   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:09.881672   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:09.881703   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:09.881604   81824 retry.go:31] will retry after 1.355846361s: waiting for machine to come up
	I0612 21:38:11.238616   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:11.239058   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:11.239094   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:11.238996   81824 retry.go:31] will retry after 1.3469357s: waiting for machine to come up
	I0612 21:38:12.587245   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:12.587747   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:12.587785   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:12.587683   81824 retry.go:31] will retry after 1.616666063s: waiting for machine to come up
	I0612 21:38:10.426384   80404 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.347778968s)
	I0612 21:38:10.426418   80404 crio.go:469] duration metric: took 2.347893056s to extract the tarball
	I0612 21:38:10.426427   80404 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:38:10.472235   80404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:10.522846   80404 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:38:10.522869   80404 cache_images.go:84] Images are preloaded, skipping loading
	I0612 21:38:10.522876   80404 kubeadm.go:928] updating node { 192.168.39.147 8443 v1.30.1 crio true true} ...
	I0612 21:38:10.523007   80404 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-591460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:38:10.523163   80404 ssh_runner.go:195] Run: crio config
	I0612 21:38:10.577165   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:38:10.577193   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:10.577209   80404 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:38:10.577244   80404 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-591460 NodeName:embed-certs-591460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:38:10.577400   80404 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-591460"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:38:10.577479   80404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:38:10.587499   80404 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:38:10.587573   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:38:10.597410   80404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0612 21:38:10.614617   80404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:38:10.632222   80404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0612 21:38:10.649693   80404 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I0612 21:38:10.653639   80404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:10.666501   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:10.802679   80404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:10.820975   80404 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460 for IP: 192.168.39.147
	I0612 21:38:10.821001   80404 certs.go:194] generating shared ca certs ...
	I0612 21:38:10.821022   80404 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:10.821187   80404 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:38:10.821233   80404 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:38:10.821243   80404 certs.go:256] generating profile certs ...
	I0612 21:38:10.821326   80404 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/client.key
	I0612 21:38:10.821402   80404 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.key.3b2e21e0
	I0612 21:38:10.821440   80404 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.key
	I0612 21:38:10.821575   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:38:10.821616   80404 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:38:10.821626   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:38:10.821655   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:38:10.821706   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:38:10.821751   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:38:10.821812   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:10.822621   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:38:10.879261   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:38:10.924352   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:38:10.961294   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:38:10.993792   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0612 21:38:11.039515   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:38:11.063161   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:38:11.086759   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:38:11.109693   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:38:11.133083   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:38:11.155716   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:38:11.181860   80404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:38:11.199989   80404 ssh_runner.go:195] Run: openssl version
	I0612 21:38:11.205811   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:38:11.216640   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.221692   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.221754   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.227829   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:38:11.239918   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:38:11.251648   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.256123   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.256176   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.261880   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:38:11.273184   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:38:11.284832   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.289679   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.289732   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.295338   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:38:11.306317   80404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:38:11.310737   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:38:11.320403   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:38:11.327756   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:38:11.333976   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:38:11.340200   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:38:11.346386   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
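
Each of the openssl runs above is a 24-hour expiry check: `openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds. A minimal Go equivalent using crypto/x509 is sketched below; the certificate path in main is illustrative.

	// Report whether a PEM-encoded certificate expires within the given duration.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate will expire within 24h")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}
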
	I0612 21:38:11.352268   80404 kubeadm.go:391] StartCluster: {Name:embed-certs-591460 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:38:11.352385   80404 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:38:11.352435   80404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:11.390802   80404 cri.go:89] found id: ""
	I0612 21:38:11.390870   80404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:38:11.402604   80404 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:38:11.402626   80404 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:38:11.402630   80404 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:38:11.402682   80404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:38:11.413636   80404 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:38:11.414999   80404 kubeconfig.go:125] found "embed-certs-591460" server: "https://192.168.39.147:8443"
	I0612 21:38:11.417654   80404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:38:11.427456   80404 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.147
	I0612 21:38:11.427496   80404 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:38:11.427509   80404 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:38:11.427559   80404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:11.462135   80404 cri.go:89] found id: ""
	I0612 21:38:11.462211   80404 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:38:11.478193   80404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:38:11.488816   80404 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:38:11.488838   80404 kubeadm.go:156] found existing configuration files:
	
	I0612 21:38:11.488899   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:38:11.498079   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:38:11.498154   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:38:11.508044   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:38:11.519721   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:38:11.519785   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:38:11.529554   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:38:11.538699   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:38:11.538750   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:38:11.548154   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:38:11.559980   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:38:11.560053   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:38:11.569737   80404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:38:11.579812   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:11.703454   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:12.773142   80404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069644541s)
	I0612 21:38:12.773183   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:12.991458   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:13.080268   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:13.207751   80404 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:38:13.207934   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:13.708672   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:14.208389   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:14.268408   80404 api_server.go:72] duration metric: took 1.060631955s to wait for apiserver process to appear ...
	I0612 21:38:14.268443   80404 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:38:14.268464   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:14.269096   80404 api_server.go:269] stopped: https://192.168.39.147:8443/healthz: Get "https://192.168.39.147:8443/healthz": dial tcp 192.168.39.147:8443: connect: connection refused
	I0612 21:38:10.445507   80243 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.445530   80243 pod_ready.go:81] duration metric: took 2.50760731s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.445542   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.450290   80243 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.450310   80243 pod_ready.go:81] duration metric: took 4.759656ms for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.450323   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.454909   80243 pod_ready.go:92] pod "kube-proxy-8lrgv" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.454940   80243 pod_ready.go:81] duration metric: took 4.597123ms for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.454951   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:12.587416   80243 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:13.505858   80243 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:13.505884   80243 pod_ready.go:81] duration metric: took 3.050925673s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:13.505896   80243 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:14.206281   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:14.206781   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:14.206810   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:14.206716   81824 retry.go:31] will retry after 2.057638604s: waiting for machine to come up
	I0612 21:38:16.266372   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:16.266920   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:16.266955   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:16.266858   81824 retry.go:31] will retry after 2.387834661s: waiting for machine to come up
	I0612 21:38:14.769114   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.056504   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:38:17.056539   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:38:17.056557   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.075356   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:38:17.075391   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:38:17.268731   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.277080   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:38:17.277111   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:38:17.768638   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.773438   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:38:17.773464   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:38:18.269037   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:18.273939   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0612 21:38:18.286895   80404 api_server.go:141] control plane version: v1.30.1
	I0612 21:38:18.286922   80404 api_server.go:131] duration metric: took 4.018473342s to wait for apiserver health ...
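
The healthz wait above polls https://<apiserver>:8443/healthz every 500ms, treating 403 and 500 responses as "not ready yet" until a 200 "ok" arrives. A rough Go sketch of that loop is below; certificate verification is skipped purely to keep the example short, whereas minikube itself authenticates against the cluster CA.

	// Poll an apiserver /healthz endpoint until it returns 200 or the timeout expires.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
				fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.147:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
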
	I0612 21:38:18.286931   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:38:18.286937   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:18.288955   80404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:38:18.290619   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:38:18.305334   80404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:38:18.336590   80404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:38:18.351276   80404 system_pods.go:59] 8 kube-system pods found
	I0612 21:38:18.351320   80404 system_pods.go:61] "coredns-7db6d8ff4d-z99cq" [575689b8-3c51-45c8-874c-481e4b9db39b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:38:18.351331   80404 system_pods.go:61] "etcd-embed-certs-591460" [190c1552-6bca-41f2-9ea9-e415e1ae9406] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:38:18.351342   80404 system_pods.go:61] "kube-apiserver-embed-certs-591460" [c0fed28f-1d80-44eb-a66a-3a5b36704882] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:38:18.351350   80404 system_pods.go:61] "kube-controller-manager-embed-certs-591460" [79758f2a-2517-4a76-a3ae-536ac3adf781] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:38:18.351357   80404 system_pods.go:61] "kube-proxy-79kz5" [74ddb284-7cb2-46ec-ab9f-246dbfa0c4ec] Running
	I0612 21:38:18.351372   80404 system_pods.go:61] "kube-scheduler-embed-certs-591460" [d9916521-fcc1-4bf1-8b03-8a5553f07bd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:38:18.351383   80404 system_pods.go:61] "metrics-server-569cc877fc-bkhxn" [f78482c8-82ea-4dbd-999f-2e4c73c98b65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:38:18.351396   80404 system_pods.go:61] "storage-provisioner" [b3b117f7-ac44-4430-afb4-c6991ce1b71d] Running
	I0612 21:38:18.351407   80404 system_pods.go:74] duration metric: took 14.792966ms to wait for pod list to return data ...
	I0612 21:38:18.351419   80404 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:38:18.357736   80404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:38:18.357769   80404 node_conditions.go:123] node cpu capacity is 2
	I0612 21:38:18.357786   80404 node_conditions.go:105] duration metric: took 6.360028ms to run NodePressure ...
	I0612 21:38:18.357805   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:18.634312   80404 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:38:18.638679   80404 kubeadm.go:733] kubelet initialised
	I0612 21:38:18.638700   80404 kubeadm.go:734] duration metric: took 4.362243ms waiting for restarted kubelet to initialise ...
	I0612 21:38:18.638706   80404 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:38:18.643840   80404 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.648561   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.648585   80404 pod_ready.go:81] duration metric: took 4.721795ms for pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.648597   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.648606   80404 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.654013   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "etcd-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.654036   80404 pod_ready.go:81] duration metric: took 5.419602ms for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.654046   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "etcd-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.654054   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.659445   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.659468   80404 pod_ready.go:81] duration metric: took 5.404211ms for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.659479   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.659487   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.741451   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.741480   80404 pod_ready.go:81] duration metric: took 81.981354ms for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.741489   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.741495   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-79kz5" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:19.140710   80404 pod_ready.go:92] pod "kube-proxy-79kz5" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:19.140734   80404 pod_ready.go:81] duration metric: took 399.230349ms for pod "kube-proxy-79kz5" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:19.140744   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
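
The pod_ready waits above boil down to listing the kube-system pods and checking each pod's Ready condition (skipping pods whose node is itself not Ready). A condensed client-go sketch of that check follows; the kubeconfig path is illustrative.

	// List kube-system pods and report which ones carry a true Ready condition.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%-55s Ready=%v\n", p.Name, podReady(&p))
		}
	}
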
	I0612 21:38:15.513300   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:18.013924   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:20.024841   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:18.656575   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:18.657074   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:18.657111   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:18.657022   81824 retry.go:31] will retry after 3.518256927s: waiting for machine to come up
	I0612 21:38:22.176416   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.176901   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has current primary IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.176930   80762 main.go:141] libmachine: (old-k8s-version-983302) Found IP for machine: 192.168.50.81
	I0612 21:38:22.176965   80762 main.go:141] libmachine: (old-k8s-version-983302) Reserving static IP address...
	I0612 21:38:22.177385   80762 main.go:141] libmachine: (old-k8s-version-983302) Reserved static IP address: 192.168.50.81
	I0612 21:38:22.177422   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "old-k8s-version-983302", mac: "52:54:00:7b:c8:d2", ip: "192.168.50.81"} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.177435   80762 main.go:141] libmachine: (old-k8s-version-983302) Waiting for SSH to be available...
	I0612 21:38:22.177459   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | skip adding static IP to network mk-old-k8s-version-983302 - found existing host DHCP lease matching {name: "old-k8s-version-983302", mac: "52:54:00:7b:c8:d2", ip: "192.168.50.81"}
	I0612 21:38:22.177471   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Getting to WaitForSSH function...
	I0612 21:38:22.179728   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.180130   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.180158   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.180273   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH client type: external
	I0612 21:38:22.180334   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa (-rw-------)
	I0612 21:38:22.180368   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:22.180387   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | About to run SSH command:
	I0612 21:38:22.180399   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | exit 0
	I0612 21:38:22.308621   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | SSH cmd err, output: <nil>: 
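
The "exit 0" probe above only verifies that the freshly booted VM accepts an SSH connection with the machine's private key (here libmachine shells out to the external ssh binary). A minimal Go version of the same check using golang.org/x/crypto/ssh is sketched below; host, user and key path are the ones logged above, and error handling is kept deliberately short.

	// Dial the VM over SSH and run "exit 0" to confirm the connection works.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func sshProbe(addr, user, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		return session.Run("exit 0") // success means SSH is usable
	}

	func main() {
		err := sshProbe("192.168.50.81:22", "docker",
			"/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa")
		fmt.Println("ssh probe error:", err)
	}
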
	I0612 21:38:22.308979   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetConfigRaw
	I0612 21:38:22.309620   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:22.312747   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.313124   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.313155   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.313421   80762 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:38:22.313635   80762 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:22.313658   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:22.313884   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.316476   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.316961   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.317014   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.317221   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.317408   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.317600   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.317775   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.317955   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.318195   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.318207   80762 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:22.431693   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:22.431728   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.431979   80762 buildroot.go:166] provisioning hostname "old-k8s-version-983302"
	I0612 21:38:22.432006   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.432191   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.434830   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.435267   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.435300   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.435431   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.435598   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.435718   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.435826   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.436056   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.436237   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.436252   80762 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-983302 && echo "old-k8s-version-983302" | sudo tee /etc/hostname
	I0612 21:38:22.563119   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-983302
	
	I0612 21:38:22.563184   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.565915   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.566281   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.566315   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.566513   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.566704   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.566885   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.567021   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.567243   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.567463   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.567490   80762 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-983302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-983302/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-983302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:22.690443   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:22.690474   80762 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:22.690494   80762 buildroot.go:174] setting up certificates
	I0612 21:38:22.690504   80762 provision.go:84] configureAuth start
	I0612 21:38:22.690514   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.690774   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:22.693186   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.693528   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.693576   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.693689   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.695948   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.696285   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.696318   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.696432   80762 provision.go:143] copyHostCerts
	I0612 21:38:22.696501   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:22.696521   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:22.696583   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:22.696662   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:22.696671   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:22.696693   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:22.696774   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:22.696784   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:22.696803   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:22.696847   80762 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-983302 san=[127.0.0.1 192.168.50.81 localhost minikube old-k8s-version-983302]
	I0612 21:38:23.576378   80157 start.go:364] duration metric: took 53.730674695s to acquireMachinesLock for "no-preload-087875"
	I0612 21:38:23.576429   80157 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:38:23.576436   80157 fix.go:54] fixHost starting: 
	I0612 21:38:23.576844   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:23.576875   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:23.594879   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40925
	I0612 21:38:23.595284   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:23.595811   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:38:23.595836   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:23.596201   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:23.596404   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:23.596559   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:38:23.598372   80157 fix.go:112] recreateIfNeeded on no-preload-087875: state=Stopped err=<nil>
	I0612 21:38:23.598399   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	W0612 21:38:23.598558   80157 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:38:23.600649   80157 out.go:177] * Restarting existing kvm2 VM for "no-preload-087875" ...
	I0612 21:38:21.147354   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:23.147393   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:22.863618   80762 provision.go:177] copyRemoteCerts
	I0612 21:38:22.863672   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:22.863698   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.866979   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.867371   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.867403   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.867548   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.867734   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.867904   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.868126   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:22.958350   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 21:38:22.984409   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:23.009623   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0612 21:38:23.038026   80762 provision.go:87] duration metric: took 347.510898ms to configureAuth
	I0612 21:38:23.038063   80762 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:23.038309   80762 config.go:182] Loaded profile config "old-k8s-version-983302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:38:23.038390   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.041196   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.041634   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.041660   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.041842   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.042044   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.042222   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.042410   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.042580   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:23.042780   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:23.042799   80762 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:23.324862   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:23.324893   80762 machine.go:97] duration metric: took 1.01124225s to provisionDockerMachine
	I0612 21:38:23.324904   80762 start.go:293] postStartSetup for "old-k8s-version-983302" (driver="kvm2")
	I0612 21:38:23.324913   80762 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:23.324928   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.325240   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:23.325274   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.328007   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.328343   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.328372   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.328578   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.328770   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.328939   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.329068   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.416040   80762 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:23.420586   80762 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:23.420607   80762 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:23.420674   80762 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:23.420739   80762 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:23.420823   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:23.432266   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:23.460619   80762 start.go:296] duration metric: took 135.703593ms for postStartSetup
	I0612 21:38:23.460661   80762 fix.go:56] duration metric: took 18.536593686s for fixHost
	I0612 21:38:23.460684   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.463415   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.463745   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.463780   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.463909   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.464110   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.464248   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.464378   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.464533   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:23.464742   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:23.464754   80762 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:23.576211   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228303.539451044
	
	I0612 21:38:23.576231   80762 fix.go:216] guest clock: 1718228303.539451044
	I0612 21:38:23.576239   80762 fix.go:229] Guest: 2024-06-12 21:38:23.539451044 +0000 UTC Remote: 2024-06-12 21:38:23.460665921 +0000 UTC m=+270.637213069 (delta=78.785123ms)
	I0612 21:38:23.576285   80762 fix.go:200] guest clock delta is within tolerance: 78.785123ms
	I0612 21:38:23.576291   80762 start.go:83] releasing machines lock for "old-k8s-version-983302", held for 18.65227368s
	I0612 21:38:23.576316   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.576617   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:23.579493   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.579881   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.579913   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.580120   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580693   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580865   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580952   80762 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:23.581005   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.581111   80762 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:23.581141   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.584053   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584262   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584443   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.584479   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584587   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.584690   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.584728   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584757   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.584855   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.584918   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.584980   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.585067   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.585115   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.585227   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.666055   80762 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:23.688409   80762 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:23.848030   80762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:23.855302   80762 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:23.855383   80762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:23.874362   80762 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:23.874389   80762 start.go:494] detecting cgroup driver to use...
	I0612 21:38:23.874461   80762 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:23.893239   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:23.909774   80762 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:23.909844   80762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:23.926084   80762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:23.943341   80762 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:24.072731   80762 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:24.244551   80762 docker.go:233] disabling docker service ...
	I0612 21:38:24.244624   80762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:24.261862   80762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:24.277051   80762 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:24.426146   80762 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:24.560634   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:24.575339   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:24.595965   80762 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0612 21:38:24.596043   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.607814   80762 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:24.607892   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.619001   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.630982   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.644326   80762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:24.658640   80762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:24.673944   80762 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:24.673994   80762 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:24.693853   80762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:24.709251   80762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:24.856222   80762 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:25.023760   80762 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:25.023842   80762 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:25.029449   80762 start.go:562] Will wait 60s for crictl version
	I0612 21:38:25.029522   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:25.033750   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:25.080911   80762 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:25.081018   80762 ssh_runner.go:195] Run: crio --version
	I0612 21:38:25.111727   80762 ssh_runner.go:195] Run: crio --version
	I0612 21:38:25.145999   80762 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
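The CRI-O reconfiguration logged above reduces to a few idempotent sed edits on the drop-in config followed by a runtime restart. A condensed sketch of that sequence, using the same file and values shown in the log lines above (run on the guest VM), would be:

	# point CRI-O at the pause image matching Kubernetes v1.20.0
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	# use the cgroupfs cgroup manager and keep conmon in the pod cgroup
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# reload units and restart the runtime so the changes take effect
	sudo systemctl daemon-reload && sudo systemctl restart crio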
	I0612 21:38:22.512748   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:24.515486   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:23.602119   80157 main.go:141] libmachine: (no-preload-087875) Calling .Start
	I0612 21:38:23.602319   80157 main.go:141] libmachine: (no-preload-087875) Ensuring networks are active...
	I0612 21:38:23.603167   80157 main.go:141] libmachine: (no-preload-087875) Ensuring network default is active
	I0612 21:38:23.603533   80157 main.go:141] libmachine: (no-preload-087875) Ensuring network mk-no-preload-087875 is active
	I0612 21:38:23.603887   80157 main.go:141] libmachine: (no-preload-087875) Getting domain xml...
	I0612 21:38:23.604617   80157 main.go:141] libmachine: (no-preload-087875) Creating domain...
	I0612 21:38:24.978550   80157 main.go:141] libmachine: (no-preload-087875) Waiting to get IP...
	I0612 21:38:24.979551   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:24.979945   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:24.980007   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:24.979925   81986 retry.go:31] will retry after 224.557195ms: waiting for machine to come up
	I0612 21:38:25.206441   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.206928   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.206957   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.206875   81986 retry.go:31] will retry after 361.682908ms: waiting for machine to come up
	I0612 21:38:25.570564   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.571139   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.571184   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.571089   81986 retry.go:31] will retry after 328.335873ms: waiting for machine to come up
	I0612 21:38:25.901471   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.902020   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.902054   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.901953   81986 retry.go:31] will retry after 505.408325ms: waiting for machine to come up
	I0612 21:38:26.408636   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:26.409139   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:26.409167   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:26.409091   81986 retry.go:31] will retry after 749.519426ms: waiting for machine to come up
	I0612 21:38:27.160100   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:27.160563   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:27.160611   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:27.160537   81986 retry.go:31] will retry after 641.037463ms: waiting for machine to come up
	I0612 21:38:25.147420   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:25.151029   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:25.151402   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:25.151432   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:25.151726   80762 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:25.156561   80762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:25.171243   80762 kubeadm.go:877] updating cluster {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:25.171386   80762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:38:25.171429   80762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:25.225872   80762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:38:25.225936   80762 ssh_runner.go:195] Run: which lz4
	I0612 21:38:25.230447   80762 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0612 21:38:25.235452   80762 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:38:25.235485   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0612 21:38:27.033962   80762 crio.go:462] duration metric: took 1.803565745s to copy over tarball
	I0612 21:38:27.034045   80762 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:38:25.149629   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:27.651785   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:26.516743   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:29.013751   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:27.803722   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:27.804278   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:27.804316   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:27.804252   81986 retry.go:31] will retry after 1.184505978s: waiting for machine to come up
	I0612 21:38:28.990221   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:28.990736   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:28.990763   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:28.990709   81986 retry.go:31] will retry after 1.061139219s: waiting for machine to come up
	I0612 21:38:30.054187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:30.054768   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:30.054805   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:30.054718   81986 retry.go:31] will retry after 1.621121981s: waiting for machine to come up
	I0612 21:38:31.677355   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:31.677938   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:31.677966   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:31.677890   81986 retry.go:31] will retry after 2.17746309s: waiting for machine to come up
	I0612 21:38:30.212028   80762 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.177947965s)
	I0612 21:38:30.212073   80762 crio.go:469] duration metric: took 3.178080815s to extract the tarball
	I0612 21:38:30.212085   80762 ssh_runner.go:146] rm: /preloaded.tar.lz4
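Because the preloaded image check above failed, the runner copies the preload tarball to the guest and unpacks it into /var before removing it. The extraction step amounts to roughly the following (same flags as in the log; /preloaded.tar.lz4 is the path the tarball was scp'd to):

	# unpack the lz4-compressed preload tarball, preserving file capabilities
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	# remove the ~450 MB tarball once it has been extracted
	sudo rm -f /preloaded.tar.lz4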
	I0612 21:38:30.256957   80762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:30.297891   80762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:38:30.297917   80762 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0612 21:38:30.298025   80762 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.298045   80762 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.298055   80762 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.298021   80762 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0612 21:38:30.298106   80762 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.298062   80762 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.298004   80762 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:30.298079   80762 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.299755   80762 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0612 21:38:30.299842   80762 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.299848   80762 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.299843   80762 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:30.299866   80762 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.299876   80762 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.299905   80762 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.299755   80762 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.466739   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0612 21:38:30.516078   80762 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0612 21:38:30.516127   80762 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0612 21:38:30.516174   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.520362   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0612 21:38:30.545437   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.563320   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0612 21:38:30.599110   80762 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0612 21:38:30.599155   80762 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.599217   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.603578   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.639450   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0612 21:38:30.649462   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.650602   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.652555   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.656970   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.672136   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.766185   80762 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0612 21:38:30.766233   80762 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.766279   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.778901   80762 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0612 21:38:30.778946   80762 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.778952   80762 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0612 21:38:30.778983   80762 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.778994   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.779041   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.793610   80762 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0612 21:38:30.793650   80762 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.793698   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.807451   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.807482   80762 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0612 21:38:30.807518   80762 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.807458   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.807518   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.807557   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.807559   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.916470   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0612 21:38:30.916564   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0612 21:38:30.916576   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0612 21:38:30.916603   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0612 21:38:30.916646   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.953152   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0612 21:38:31.194046   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:31.341827   80762 cache_images.go:92] duration metric: took 1.043891497s to LoadCachedImages
	W0612 21:38:31.341922   80762 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0612 21:38:31.341937   80762 kubeadm.go:928] updating node { 192.168.50.81 8443 v1.20.0 crio true true} ...
	I0612 21:38:31.342064   80762 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-983302 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:38:31.342154   80762 ssh_runner.go:195] Run: crio config
	I0612 21:38:31.395673   80762 cni.go:84] Creating CNI manager for ""
	I0612 21:38:31.395706   80762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:31.395722   80762 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:38:31.395744   80762 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-983302 NodeName:old-k8s-version-983302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0612 21:38:31.395918   80762 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-983302"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:38:31.395995   80762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0612 21:38:31.410706   80762 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:38:31.410785   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:38:31.425161   80762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0612 21:38:31.445883   80762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:38:31.463605   80762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0612 21:38:31.482797   80762 ssh_runner.go:195] Run: grep 192.168.50.81	control-plane.minikube.internal$ /etc/hosts
	I0612 21:38:31.486974   80762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:31.499681   80762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:31.645490   80762 ssh_runner.go:195] Run: sudo systemctl start kubelet
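The /etc/hosts update above does not edit the file in place: it rebuilds the file without any stale control-plane entry in a temporary location, appends the current entry, and then copies the result back over /etc/hosts. Schematically (IP and hostname taken from the log lines above):

	# rebuild /etc/hosts without any old control-plane.minikube.internal entry, then append the current one
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
	  printf '192.168.50.81\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
	# overwrite /etc/hosts with the rebuilt copy
	sudo cp /tmp/h.$$ /etc/hosts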
	I0612 21:38:31.668769   80762 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302 for IP: 192.168.50.81
	I0612 21:38:31.668797   80762 certs.go:194] generating shared ca certs ...
	I0612 21:38:31.668820   80762 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:31.668987   80762 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:38:31.669061   80762 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:38:31.669088   80762 certs.go:256] generating profile certs ...
	I0612 21:38:31.669212   80762 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/client.key
	I0612 21:38:31.669309   80762 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key.1098c83c
	I0612 21:38:31.669373   80762 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key
	I0612 21:38:31.669548   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:38:31.669598   80762 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:38:31.669613   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:38:31.669662   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:38:31.669723   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:38:31.669759   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:38:31.669830   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:31.670835   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:38:31.717330   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:38:31.754900   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:38:31.798099   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:38:31.839647   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0612 21:38:31.883454   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:38:31.920765   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:38:31.953069   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0612 21:38:31.978134   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:38:32.002475   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:38:32.027784   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:38:32.053563   80762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:38:32.074493   80762 ssh_runner.go:195] Run: openssl version
	I0612 21:38:32.080620   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:38:32.093531   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.098615   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.098688   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.104777   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:38:32.116551   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:38:32.130188   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.135197   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.135279   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.142777   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:38:32.156051   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:38:32.169866   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.175249   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.175340   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.181561   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:38:32.193430   80762 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:38:32.198235   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:38:32.204654   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:38:32.210771   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:38:32.216966   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:38:32.223203   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:38:32.230990   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 21:38:32.237290   80762 kubeadm.go:391] StartCluster: {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:38:32.237446   80762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:38:32.237503   80762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:32.282436   80762 cri.go:89] found id: ""
	I0612 21:38:32.282516   80762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:38:32.295283   80762 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:38:32.295313   80762 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:38:32.295321   80762 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:38:32.295400   80762 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:38:32.307483   80762 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:38:32.308555   80762 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-983302" does not appear in /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:38:32.309335   80762 kubeconfig.go:62] /home/jenkins/minikube-integration/17779-14199/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-983302" cluster setting kubeconfig missing "old-k8s-version-983302" context setting]
	I0612 21:38:32.310486   80762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:32.397524   80762 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:38:32.411765   80762 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.81
	I0612 21:38:32.411797   80762 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:38:32.411807   80762 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:38:32.411849   80762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:32.460009   80762 cri.go:89] found id: ""
	I0612 21:38:32.460078   80762 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:38:32.481670   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:38:32.493664   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:38:32.493684   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:38:32.493734   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:38:32.503974   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:38:32.504044   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:38:32.515971   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:38:32.525772   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:38:32.525832   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:38:32.537137   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:38:32.548539   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:38:32.548600   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:38:32.560401   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:38:32.570608   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:38:32.570681   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:38:32.582763   80762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:38:32.594407   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:32.734633   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:30.151681   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:31.658859   80404 pod_ready.go:92] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:31.658881   80404 pod_ready.go:81] duration metric: took 12.518130926s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:31.658890   80404 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:33.666360   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:31.357093   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:33.513222   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:33.857141   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:33.857675   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:33.857702   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:33.857648   81986 retry.go:31] will retry after 2.485654549s: waiting for machine to come up
	I0612 21:38:36.344611   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:36.345117   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:36.345148   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:36.345075   81986 retry.go:31] will retry after 3.560063035s: waiting for machine to come up
	I0612 21:38:33.526337   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.768139   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.896716   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.986708   80762 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:38:33.986832   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:34.487194   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:34.987580   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.486966   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.987793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:36.487534   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:36.987526   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:37.487035   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.669161   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:38.166177   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:35.513787   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:38.011903   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:39.907588   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:39.908051   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:39.908110   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:39.907994   81986 retry.go:31] will retry after 4.524521166s: waiting for machine to come up
	I0612 21:38:37.986904   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:38.487262   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:38.986907   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:39.486895   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:39.987060   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.487385   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.987049   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:41.487325   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:41.987550   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:42.487225   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.665078   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:42.665731   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:44.666653   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:40.512741   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:42.513175   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:45.013451   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:44.434330   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.434850   80157 main.go:141] libmachine: (no-preload-087875) Found IP for machine: 192.168.72.63
	I0612 21:38:44.434883   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has current primary IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.434893   80157 main.go:141] libmachine: (no-preload-087875) Reserving static IP address...
	I0612 21:38:44.435324   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "no-preload-087875", mac: "52:54:00:6b:a2:aa", ip: "192.168.72.63"} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.435358   80157 main.go:141] libmachine: (no-preload-087875) Reserved static IP address: 192.168.72.63
	I0612 21:38:44.435378   80157 main.go:141] libmachine: (no-preload-087875) DBG | skip adding static IP to network mk-no-preload-087875 - found existing host DHCP lease matching {name: "no-preload-087875", mac: "52:54:00:6b:a2:aa", ip: "192.168.72.63"}
	I0612 21:38:44.435388   80157 main.go:141] libmachine: (no-preload-087875) Waiting for SSH to be available...
	I0612 21:38:44.435397   80157 main.go:141] libmachine: (no-preload-087875) DBG | Getting to WaitForSSH function...
	I0612 21:38:44.437881   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.438196   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.438218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.438385   80157 main.go:141] libmachine: (no-preload-087875) DBG | Using SSH client type: external
	I0612 21:38:44.438414   80157 main.go:141] libmachine: (no-preload-087875) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa (-rw-------)
	I0612 21:38:44.438452   80157 main.go:141] libmachine: (no-preload-087875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:44.438469   80157 main.go:141] libmachine: (no-preload-087875) DBG | About to run SSH command:
	I0612 21:38:44.438489   80157 main.go:141] libmachine: (no-preload-087875) DBG | exit 0
	I0612 21:38:44.571149   80157 main.go:141] libmachine: (no-preload-087875) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:44.571499   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetConfigRaw
	I0612 21:38:44.572172   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:44.574754   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.575142   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.575187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.575406   80157 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/config.json ...
	I0612 21:38:44.575580   80157 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:44.575595   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:44.575825   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.578584   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.579008   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.579030   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.579214   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.579394   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.579534   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.579684   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.579924   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.580096   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.580109   80157 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:44.691573   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:44.691609   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.691890   80157 buildroot.go:166] provisioning hostname "no-preload-087875"
	I0612 21:38:44.691914   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.692120   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.695218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.695697   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.695729   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.695783   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.695986   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.696200   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.696383   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.696572   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.696776   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.696794   80157 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-087875 && echo "no-preload-087875" | sudo tee /etc/hostname
	I0612 21:38:44.821857   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-087875
	
	I0612 21:38:44.821893   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.824821   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.825263   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.825295   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.825523   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.825740   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.825912   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.826024   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.826187   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.826406   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.826430   80157 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-087875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-087875/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-087875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:44.948871   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:44.948904   80157 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:44.948930   80157 buildroot.go:174] setting up certificates
	I0612 21:38:44.948941   80157 provision.go:84] configureAuth start
	I0612 21:38:44.948954   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.949247   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:44.952166   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.952511   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.952538   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.952662   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.955149   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.955483   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.955505   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.955658   80157 provision.go:143] copyHostCerts
	I0612 21:38:44.955731   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:44.955743   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:44.955807   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:44.955929   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:44.955942   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:44.955975   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:44.956052   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:44.956059   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:44.956078   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:44.956125   80157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.no-preload-087875 san=[127.0.0.1 192.168.72.63 localhost minikube no-preload-087875]
	I0612 21:38:45.138701   80157 provision.go:177] copyRemoteCerts
	I0612 21:38:45.138758   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:45.138781   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.141540   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.142011   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.142055   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.142199   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.142457   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.142603   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.142765   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.234480   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:45.259043   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0612 21:38:45.290511   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:38:45.316377   80157 provision.go:87] duration metric: took 367.423709ms to configureAuth
	I0612 21:38:45.316403   80157 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:45.316607   80157 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:45.316684   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.319596   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.320160   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.320187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.320384   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.320598   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.320778   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.320973   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.321203   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:45.321368   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:45.321387   80157 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:45.611478   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:45.611511   80157 machine.go:97] duration metric: took 1.035919707s to provisionDockerMachine
	I0612 21:38:45.611523   80157 start.go:293] postStartSetup for "no-preload-087875" (driver="kvm2")
	I0612 21:38:45.611533   80157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:45.611556   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.611843   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:45.611862   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.615071   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.615542   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.615582   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.615715   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.615889   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.616028   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.616204   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.707710   80157 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:45.712155   80157 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:45.712177   80157 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:45.712235   80157 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:45.712301   80157 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:45.712386   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:45.722654   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:45.747626   80157 start.go:296] duration metric: took 136.091584ms for postStartSetup
	I0612 21:38:45.747666   80157 fix.go:56] duration metric: took 22.171227252s for fixHost
	I0612 21:38:45.747685   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.750588   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.750972   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.750999   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.751231   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.751443   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.751598   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.751773   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.752005   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:45.752181   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:45.752195   80157 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:45.864042   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228325.837473906
	
	I0612 21:38:45.864068   80157 fix.go:216] guest clock: 1718228325.837473906
	I0612 21:38:45.864079   80157 fix.go:229] Guest: 2024-06-12 21:38:45.837473906 +0000 UTC Remote: 2024-06-12 21:38:45.747669277 +0000 UTC m=+358.493088442 (delta=89.804629ms)
	I0612 21:38:45.864106   80157 fix.go:200] guest clock delta is within tolerance: 89.804629ms
	I0612 21:38:45.864114   80157 start.go:83] releasing machines lock for "no-preload-087875", held for 22.287706082s
	I0612 21:38:45.864152   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.864448   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:45.867230   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.867603   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.867633   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.867768   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868293   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868453   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868535   80157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:45.868575   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.868663   80157 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:45.868681   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.871218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871489   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871678   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.871719   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871915   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.872061   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.872085   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.872109   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.872240   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.872246   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.872522   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.872529   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.872692   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.872868   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.953249   80157 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:45.976778   80157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:46.124511   80157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:46.130509   80157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:46.130575   80157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:46.149670   80157 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:46.149691   80157 start.go:494] detecting cgroup driver to use...
	I0612 21:38:46.149755   80157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:46.167865   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:46.182896   80157 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:46.182951   80157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:46.197058   80157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:46.211517   80157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:46.331986   80157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:46.500675   80157 docker.go:233] disabling docker service ...
	I0612 21:38:46.500745   80157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:46.516858   80157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:46.530617   80157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:46.674917   80157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:46.810090   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:46.825079   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:46.843895   80157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:38:46.843963   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.854170   80157 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:46.854245   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.864699   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.875057   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.886063   80157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:46.897688   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.908984   80157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.926803   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.939373   80157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:46.948868   80157 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:46.948922   80157 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:46.963593   80157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:46.973735   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:47.108669   80157 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:47.249938   80157 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:47.250044   80157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:47.255480   80157 start.go:562] Will wait 60s for crictl version
	I0612 21:38:47.255556   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.259730   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:47.303074   80157 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:47.303187   80157 ssh_runner.go:195] Run: crio --version
	I0612 21:38:47.332225   80157 ssh_runner.go:195] Run: crio --version
	I0612 21:38:47.363628   80157 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:38:42.987579   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:43.487465   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:43.987265   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:44.487935   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:44.987399   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:45.487793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:45.986898   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:46.486985   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:46.986848   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:47.486947   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:47.164573   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:49.165711   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:47.512195   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:49.512366   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:47.365068   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:47.367703   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:47.368079   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:47.368103   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:47.368325   80157 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:47.372608   80157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:47.386411   80157 kubeadm.go:877] updating cluster {Name:no-preload-087875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:47.386750   80157 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:38:47.386796   80157 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:47.422165   80157 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:38:47.422189   80157 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0612 21:38:47.422227   80157 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:47.422280   80157 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.422355   80157 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.422370   80157 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.422311   80157 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.422347   80157 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.422318   80157 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0612 21:38:47.422599   80157 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.423599   80157 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0612 21:38:47.423610   80157 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.423612   80157 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.423630   80157 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:47.423626   80157 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.423699   80157 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.423737   80157 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.423720   80157 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.556807   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0612 21:38:47.557424   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.561887   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.569402   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.571880   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.576879   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.587848   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.759890   80157 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0612 21:38:47.759926   80157 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.759947   80157 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0612 21:38:47.759973   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.759976   80157 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0612 21:38:47.760006   80157 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.760015   80157 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0612 21:38:47.759977   80157 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.760061   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760063   80157 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.760075   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760073   80157 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0612 21:38:47.760091   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760101   80157 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.760164   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.766878   80157 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0612 21:38:47.766905   80157 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.766943   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.777168   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.777197   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.778414   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.778459   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.778414   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.779057   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.882668   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0612 21:38:47.882770   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.902416   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0612 21:38:47.902532   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:47.917388   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0612 21:38:47.917417   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0612 21:38:47.917417   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0612 21:38:47.917473   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0612 21:38:47.917501   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:38:47.917528   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:47.917545   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:38:47.917500   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0612 21:38:47.917558   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.917594   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.917502   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:47.917559   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0612 21:38:47.929251   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0612 21:38:47.929299   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0612 21:38:47.929308   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0612 21:38:48.312589   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:50.713720   80157 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1: (2.796151375s)
	I0612 21:38:50.713767   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0612 21:38:50.713877   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.796263274s)
	I0612 21:38:50.713901   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0612 21:38:50.713877   80157 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.401254109s)
	I0612 21:38:50.713921   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:50.713966   80157 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0612 21:38:50.713987   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:50.714017   80157 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:50.714063   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.987863   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:48.487299   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:48.986886   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:49.486972   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:49.987859   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:50.487034   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:50.987724   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.486948   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.986873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:52.487668   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.665638   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:53.665855   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:51.512765   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:54.011870   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:53.169682   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.455668553s)
	I0612 21:38:53.169705   80157 ssh_runner.go:235] Completed: which crictl: (2.455619981s)
	I0612 21:38:53.169714   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0612 21:38:53.169741   80157 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:53.169759   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:53.169784   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:53.216895   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0612 21:38:53.217020   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:38:57.220343   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.050521066s)
	I0612 21:38:57.220376   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0612 21:38:57.220397   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:57.220444   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:57.220443   80157 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (4.003396955s)
	I0612 21:38:57.220487   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0612 21:38:52.987635   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:53.487500   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:53.987860   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:54.487855   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:54.986868   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:55.487259   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:55.987902   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.487535   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.987269   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:57.487542   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.166299   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.665085   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:56.012847   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.557142   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.682288   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.46182102s)
	I0612 21:38:58.682313   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0612 21:38:58.682337   80157 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:38:58.682376   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:39:00.576373   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.893964365s)
	I0612 21:39:00.576412   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0612 21:39:00.576443   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:39:00.576504   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:38:57.987222   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:58.486976   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:58.986913   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:59.487269   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:59.987289   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.487208   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.987690   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:01.487283   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:01.987541   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:02.487589   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.667732   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:03.165317   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:01.012684   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:03.015111   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:02.445930   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.86940281s)
	I0612 21:39:02.445960   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0612 21:39:02.445994   80157 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:39:02.446071   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:39:03.393330   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0612 21:39:03.393375   80157 cache_images.go:123] Successfully loaded all cached images
	I0612 21:39:03.393382   80157 cache_images.go:92] duration metric: took 15.9711807s to LoadCachedImages
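
The stretch above is the per-image cache path that the summary line just timed: for each image, podman image inspect checks whether CRI-O's store already has it, a stat checks whether the cached tarball is already under /var/lib/minikube/images on the node (it would be copied over from the host cache otherwise), and podman load -i imports it. A rough sketch of that decision flow, run locally rather than over SSH; the helper name and error handling are illustrative, not minikube's cache_images implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage mirrors the per-image steps seen in the log:
// 1) skip if the runtime already has the image,
// 2) require the cached tarball on the node (the log's stat check),
// 3) load it with `podman load -i`.
func loadCachedImage(image, tarball string) error {
	// Step 1: does the container runtime already know the image?
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present, nothing to do
	}
	// Step 2: the cached tarball must exist on the node. The real run would
	// copy it from the host cache when missing; this sketch just reports it.
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("cached image %s not on node: %w", tarball, err)
	}
	// Step 3: load the tarball into the runtime's image store.
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	err := loadCachedImage("registry.k8s.io/kube-apiserver:v1.30.1",
		"/var/lib/minikube/images/kube-apiserver_v1.30.1")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
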
	I0612 21:39:03.393397   80157 kubeadm.go:928] updating node { 192.168.72.63 8443 v1.30.1 crio true true} ...
	I0612 21:39:03.393543   80157 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-087875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:39:03.393658   80157 ssh_runner.go:195] Run: crio config
	I0612 21:39:03.448859   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:39:03.448884   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:39:03.448901   80157 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:39:03.448930   80157 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.63 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-087875 NodeName:no-preload-087875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:39:03.449103   80157 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-087875"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:39:03.449181   80157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:39:03.462756   80157 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:39:03.462825   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:39:03.472653   80157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0612 21:39:03.491567   80157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:39:03.509239   80157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
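
The 2158-byte file written above is the kubeadm.yaml shown earlier: a single multi-document YAML holding the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, generated in memory and copied to the node alongside the kubelet unit and its drop-in. A stdlib-only sketch for splitting such a file into its documents and listing their kinds, purely for local inspection (not part of minikube):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path from the log; point this at a local copy when experimenting.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Multi-document YAML separates documents with a bare "---" line.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
				break
			}
		}
		fmt.Printf("document %d: kind=%s\n", i+1, kind)
	}
}
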
	I0612 21:39:03.527802   80157 ssh_runner.go:195] Run: grep 192.168.72.63	control-plane.minikube.internal$ /etc/hosts
	I0612 21:39:03.531523   80157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:39:03.543748   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:39:03.666376   80157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:39:03.683563   80157 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875 for IP: 192.168.72.63
	I0612 21:39:03.683587   80157 certs.go:194] generating shared ca certs ...
	I0612 21:39:03.683606   80157 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:39:03.683766   80157 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:39:03.683816   80157 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:39:03.683831   80157 certs.go:256] generating profile certs ...
	I0612 21:39:03.683927   80157 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/client.key
	I0612 21:39:03.684010   80157 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.key.13709275
	I0612 21:39:03.684066   80157 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.key
	I0612 21:39:03.684217   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:39:03.684259   80157 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:39:03.684272   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:39:03.684318   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:39:03.684364   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:39:03.684395   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:39:03.684455   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:39:03.685098   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:39:03.732817   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:39:03.771449   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:39:03.800774   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:39:03.831845   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 21:39:03.862000   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 21:39:03.901036   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:39:03.925025   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:39:03.950862   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:39:03.974222   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:39:04.002698   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:39:04.028173   80157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:39:04.044685   80157 ssh_runner.go:195] Run: openssl version
	I0612 21:39:04.050600   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:39:04.061893   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.066371   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.066424   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.072463   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:39:04.083929   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:39:04.094777   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.099380   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.099435   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.105125   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:39:04.116191   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:39:04.127408   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.132234   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.132315   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.138401   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:39:04.149542   80157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:39:04.154133   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:39:04.160171   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:39:04.166410   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:39:04.172650   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:39:04.178506   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:39:04.184375   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
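
Each openssl x509 -checkend 86400 call above asks whether the certificate expires within the next 24 hours (the command exits non-zero if it does). The equivalent check in Go with crypto/x509, as a standalone sketch rather than what minikube actually runs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
// This mirrors `openssl x509 -noout -in <path> -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
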
	I0612 21:39:04.190412   80157 kubeadm.go:391] StartCluster: {Name:no-preload-087875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:39:04.190524   80157 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:39:04.190584   80157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:39:04.235297   80157 cri.go:89] found id: ""
	I0612 21:39:04.235362   80157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:39:04.246400   80157 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:39:04.246429   80157 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:39:04.246449   80157 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:39:04.246499   80157 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:39:04.257137   80157 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:39:04.258277   80157 kubeconfig.go:125] found "no-preload-087875" server: "https://192.168.72.63:8443"
	I0612 21:39:04.260656   80157 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:39:04.270637   80157 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.63
	I0612 21:39:04.270666   80157 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:39:04.270675   80157 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:39:04.270730   80157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:39:04.316487   80157 cri.go:89] found id: ""
	I0612 21:39:04.316550   80157 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:39:04.334814   80157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:39:04.346430   80157 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:39:04.346451   80157 kubeadm.go:156] found existing configuration files:
	
	I0612 21:39:04.346500   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:39:04.356362   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:39:04.356417   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:39:04.366999   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:39:04.378005   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:39:04.378061   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:39:04.388052   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:39:04.397130   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:39:04.397185   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:39:04.407053   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:39:04.416338   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:39:04.416395   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:39:04.426475   80157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
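
The block above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is grepped for https://control-plane.minikube.internal:8443 and removed when the entry cannot be found (here the files simply do not exist yet), after which the freshly generated kubeadm.yaml is copied into place. A stdlib sketch of that per-file check; removeIfStale is an illustrative helper, not minikube's kubeadm.go:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes path unless it mentions the expected control-plane endpoint.
// A missing file is treated the same as a stale one, matching the log above.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // config already points at the right endpoint, keep it
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := removeIfStale("/etc/kubernetes/"+f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
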
	I0612 21:39:04.436852   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:04.565452   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.461610   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.676493   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.767236   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.870855   80157 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:39:05.870960   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.372034   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.871680   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.906242   80157 api_server.go:72] duration metric: took 1.035387498s to wait for apiserver process to appear ...
	I0612 21:39:06.906273   80157 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:39:06.906296   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:06.906883   80157 api_server.go:269] stopped: https://192.168.72.63:8443/healthz: Get "https://192.168.72.63:8443/healthz": dial tcp 192.168.72.63:8443: connect: connection refused
	I0612 21:39:02.987853   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:03.487382   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:03.987303   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:04.487852   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:04.987464   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.486928   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.987660   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.487208   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.987822   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:07.487497   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.166502   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:07.665452   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:09.665766   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:05.512792   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:08.012392   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:10.014073   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:07.407227   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.589285   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:39:09.589319   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:39:09.589336   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.726716   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:09.726753   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:09.907032   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.917718   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:09.917746   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:10.406997   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:10.412127   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:10.412156   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:10.906700   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:10.911262   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 200:
	ok
	I0612 21:39:10.918778   80157 api_server.go:141] control plane version: v1.30.1
	I0612 21:39:10.918813   80157 api_server.go:131] duration metric: took 4.012531107s to wait for apiserver health ...
	I0612 21:39:10.918824   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:39:10.918832   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:39:10.921012   80157 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:39:10.922401   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:39:10.948209   80157 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:39:10.974530   80157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:39:10.986054   80157 system_pods.go:59] 8 kube-system pods found
	I0612 21:39:10.986091   80157 system_pods.go:61] "coredns-7db6d8ff4d-sh68b" [17691219-bfda-443b-8049-e6e966aadb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:39:10.986102   80157 system_pods.go:61] "etcd-no-preload-087875" [3048b12a-4354-45fd-99c7-d2a84035e102] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:39:10.986114   80157 system_pods.go:61] "kube-apiserver-no-preload-087875" [0f39a5fd-1a64-479f-bb28-c19bc10b7ed3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:39:10.986127   80157 system_pods.go:61] "kube-controller-manager-no-preload-087875" [62cc49b8-b05f-4371-aa17-bea17d08d2f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:39:10.986141   80157 system_pods.go:61] "kube-proxy-htv9h" [e3eb4693-7896-4dd2-98b8-91f06b028a1e] Running
	I0612 21:39:10.986158   80157 system_pods.go:61] "kube-scheduler-no-preload-087875" [ef833b9d-75ca-43bd-b196-30594775b174] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:39:10.986170   80157 system_pods.go:61] "metrics-server-569cc877fc-d5mj6" [79ba2aad-c942-4162-b69a-5c7dd138a618] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:39:10.986178   80157 system_pods.go:61] "storage-provisioner" [5793c778-1a5c-4cfe-924a-b85b72df53cd] Running
	I0612 21:39:10.986187   80157 system_pods.go:74] duration metric: took 11.634011ms to wait for pod list to return data ...
	I0612 21:39:10.986199   80157 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:39:10.992801   80157 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:39:10.992843   80157 node_conditions.go:123] node cpu capacity is 2
	I0612 21:39:10.992856   80157 node_conditions.go:105] duration metric: took 6.648025ms to run NodePressure ...
	I0612 21:39:10.992878   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:11.263413   80157 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:39:11.271758   80157 kubeadm.go:733] kubelet initialised
	I0612 21:39:11.271781   80157 kubeadm.go:734] duration metric: took 8.347232ms waiting for restarted kubelet to initialise ...
	I0612 21:39:11.271789   80157 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:39:11.277940   80157 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:07.987732   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:08.486974   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:08.986873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:09.486941   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:09.986929   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:10.487754   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:10.987685   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:11.486910   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:11.987457   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:12.486873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:12.165604   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:14.166986   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:12.029928   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:14.512085   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:13.287555   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:15.786345   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:12.987394   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:13.486915   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:13.987880   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:14.486881   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:14.986951   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:15.487462   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:15.986850   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.487213   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.987066   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:17.487882   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.666123   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:18.666354   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:16.512936   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:19.013463   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:18.285110   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:20.788396   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:21.284869   80157 pod_ready.go:92] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:21.284902   80157 pod_ready.go:81] duration metric: took 10.006929439s for pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:21.284916   80157 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:17.987273   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:18.486996   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:18.987836   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:19.487622   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:19.987381   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:20.487005   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:20.987638   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.487670   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.987552   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:22.487438   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.166215   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:23.665272   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:21.512836   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:24.014108   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:23.291502   80157 pod_ready.go:102] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:25.791813   80157 pod_ready.go:92] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.791842   80157 pod_ready.go:81] duration metric: took 4.506916362s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.791854   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.796901   80157 pod_ready.go:92] pod "kube-apiserver-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.796928   80157 pod_ready.go:81] duration metric: took 5.066599ms for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.796939   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.801550   80157 pod_ready.go:92] pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.801571   80157 pod_ready.go:81] duration metric: took 4.624771ms for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.801580   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-htv9h" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.806178   80157 pod_ready.go:92] pod "kube-proxy-htv9h" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.806195   80157 pod_ready.go:81] duration metric: took 4.609956ms for pod "kube-proxy-htv9h" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.806204   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.809883   80157 pod_ready.go:92] pod "kube-scheduler-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.809902   80157 pod_ready.go:81] duration metric: took 3.691999ms for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.809914   80157 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:22.987165   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:23.487122   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:23.987804   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:24.487583   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:24.987647   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.487126   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.987251   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:26.486996   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:26.987044   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:27.486911   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.668272   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:28.164809   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:26.513220   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:29.013047   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:27.817352   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:30.315600   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:27.987822   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:28.487496   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:28.987166   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:29.487892   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:29.987787   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.487315   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.987933   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:31.487255   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:31.987793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:32.487881   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.165900   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.167795   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:34.665939   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:31.013473   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:33.015281   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.316680   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:34.317063   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:36.816905   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.987267   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:33.487678   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:33.987296   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:33.987371   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:34.028670   80762 cri.go:89] found id: ""
	I0612 21:39:34.028699   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.028710   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:34.028717   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:34.028778   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:34.068371   80762 cri.go:89] found id: ""
	I0612 21:39:34.068400   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.068412   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:34.068419   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:34.068485   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:34.104605   80762 cri.go:89] found id: ""
	I0612 21:39:34.104634   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.104643   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:34.104650   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:34.104745   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:34.150301   80762 cri.go:89] found id: ""
	I0612 21:39:34.150327   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.150335   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:34.150341   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:34.150396   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:34.191426   80762 cri.go:89] found id: ""
	I0612 21:39:34.191462   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.191475   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:34.191484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:34.191562   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:34.228483   80762 cri.go:89] found id: ""
	I0612 21:39:34.228523   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.228535   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:34.228543   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:34.228653   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:34.262834   80762 cri.go:89] found id: ""
	I0612 21:39:34.262863   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.262873   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:34.262881   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:34.262944   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:34.298283   80762 cri.go:89] found id: ""
	I0612 21:39:34.298312   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.298321   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:34.298330   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:34.298340   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:34.350889   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:34.350918   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:34.365264   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:34.365289   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:34.508130   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:34.508162   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:34.508180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:34.572036   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:34.572076   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:37.114371   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:37.127410   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:37.127492   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:37.168684   80762 cri.go:89] found id: ""
	I0612 21:39:37.168705   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.168714   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:37.168723   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:37.168798   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:37.208765   80762 cri.go:89] found id: ""
	I0612 21:39:37.208797   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.208808   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:37.208815   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:37.208875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:37.266245   80762 cri.go:89] found id: ""
	I0612 21:39:37.266270   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.266277   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:37.266283   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:37.266331   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:37.313557   80762 cri.go:89] found id: ""
	I0612 21:39:37.313586   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.313597   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:37.313606   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:37.313677   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:37.353292   80762 cri.go:89] found id: ""
	I0612 21:39:37.353318   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.353325   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:37.353332   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:37.353389   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:37.391940   80762 cri.go:89] found id: ""
	I0612 21:39:37.391974   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.391984   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:37.392015   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:37.392078   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:37.432133   80762 cri.go:89] found id: ""
	I0612 21:39:37.432154   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.432166   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:37.432174   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:37.432228   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:37.468274   80762 cri.go:89] found id: ""
	I0612 21:39:37.468302   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.468310   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:37.468328   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:37.468347   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:37.543904   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:37.543941   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:37.586957   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:37.586982   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:37.641247   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:37.641288   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:37.657076   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:37.657101   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:37.729279   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:37.165427   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:39.166383   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:35.512174   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:37.513222   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:40.012806   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:39.317119   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:41.817268   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:40.229638   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:40.243825   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:40.243889   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:40.282795   80762 cri.go:89] found id: ""
	I0612 21:39:40.282821   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.282829   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:40.282834   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:40.282879   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:40.320211   80762 cri.go:89] found id: ""
	I0612 21:39:40.320236   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.320246   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:40.320252   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:40.320338   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:40.356270   80762 cri.go:89] found id: ""
	I0612 21:39:40.356292   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.356300   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:40.356306   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:40.356353   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:40.394667   80762 cri.go:89] found id: ""
	I0612 21:39:40.394691   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.394699   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:40.394704   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:40.394751   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:40.432765   80762 cri.go:89] found id: ""
	I0612 21:39:40.432794   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.432804   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:40.432811   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:40.432883   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:40.472347   80762 cri.go:89] found id: ""
	I0612 21:39:40.472386   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.472406   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:40.472414   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:40.472477   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:40.508414   80762 cri.go:89] found id: ""
	I0612 21:39:40.508445   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.508456   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:40.508464   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:40.508521   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:40.546938   80762 cri.go:89] found id: ""
	I0612 21:39:40.546964   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.546972   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:40.546981   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:40.546993   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:40.621356   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:40.621380   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:40.621398   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:40.703830   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:40.703865   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:40.744915   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:40.744965   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:40.798883   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:40.798920   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:41.167469   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:43.667403   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:42.512351   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:44.512639   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:44.317053   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:46.317350   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:43.315905   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:43.330150   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:43.330221   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:43.377307   80762 cri.go:89] found id: ""
	I0612 21:39:43.377337   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.377347   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:43.377362   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:43.377426   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:43.412608   80762 cri.go:89] found id: ""
	I0612 21:39:43.412638   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.412648   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:43.412654   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:43.412718   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:43.446716   80762 cri.go:89] found id: ""
	I0612 21:39:43.446746   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.446755   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:43.446762   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:43.446823   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:43.484607   80762 cri.go:89] found id: ""
	I0612 21:39:43.484636   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.484647   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:43.484655   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:43.484700   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:43.522400   80762 cri.go:89] found id: ""
	I0612 21:39:43.522427   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.522438   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:43.522445   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:43.522529   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:43.559121   80762 cri.go:89] found id: ""
	I0612 21:39:43.559147   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.559163   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:43.559211   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:43.559292   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:43.595886   80762 cri.go:89] found id: ""
	I0612 21:39:43.595919   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.595937   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:43.595945   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:43.596011   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:43.638549   80762 cri.go:89] found id: ""
	I0612 21:39:43.638573   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.638583   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:43.638594   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:43.638609   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:43.705300   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:43.705338   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:43.723246   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:43.723281   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:43.807735   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:43.807760   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:43.807870   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:43.882971   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:43.883017   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:46.421476   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:46.434447   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:46.434532   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:46.470710   80762 cri.go:89] found id: ""
	I0612 21:39:46.470745   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.470758   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:46.470765   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:46.470828   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:46.504843   80762 cri.go:89] found id: ""
	I0612 21:39:46.504871   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.504878   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:46.504884   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:46.504941   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:46.542937   80762 cri.go:89] found id: ""
	I0612 21:39:46.542965   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.542973   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:46.542979   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:46.543035   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:46.581098   80762 cri.go:89] found id: ""
	I0612 21:39:46.581124   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.581133   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:46.581143   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:46.581189   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:46.617289   80762 cri.go:89] found id: ""
	I0612 21:39:46.617319   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.617329   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:46.617337   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:46.617402   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:46.651012   80762 cri.go:89] found id: ""
	I0612 21:39:46.651045   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.651057   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:46.651070   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:46.651141   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:46.688344   80762 cri.go:89] found id: ""
	I0612 21:39:46.688370   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.688379   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:46.688388   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:46.688451   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:46.724349   80762 cri.go:89] found id: ""
	I0612 21:39:46.724374   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.724382   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:46.724390   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:46.724404   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:46.797866   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:46.797894   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:46.797912   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:46.887520   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:46.887557   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:46.928143   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:46.928182   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:46.981416   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:46.981451   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:46.164845   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:48.166925   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:46.513519   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:49.016041   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:48.816335   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:50.816407   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:49.497028   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:49.510077   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:49.510147   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:49.544313   80762 cri.go:89] found id: ""
	I0612 21:39:49.544349   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.544359   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:49.544365   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:49.544416   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:49.580220   80762 cri.go:89] found id: ""
	I0612 21:39:49.580248   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.580256   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:49.580262   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:49.580316   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:49.619582   80762 cri.go:89] found id: ""
	I0612 21:39:49.619607   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.619615   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:49.619620   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:49.619692   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:49.656453   80762 cri.go:89] found id: ""
	I0612 21:39:49.656479   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.656487   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:49.656493   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:49.656557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:49.694285   80762 cri.go:89] found id: ""
	I0612 21:39:49.694318   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.694330   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:49.694338   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:49.694417   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:49.731100   80762 cri.go:89] found id: ""
	I0612 21:39:49.731127   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.731135   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:49.731140   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:49.731209   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:49.767709   80762 cri.go:89] found id: ""
	I0612 21:39:49.767731   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.767738   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:49.767744   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:49.767787   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:49.801231   80762 cri.go:89] found id: ""
	I0612 21:39:49.801265   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.801283   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:49.801294   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:49.801309   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:49.848500   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:49.848542   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:49.900084   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:49.900121   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:49.916208   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:49.916234   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:49.983283   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:49.983310   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:49.983325   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:52.566884   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:52.580400   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:52.580476   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:52.615922   80762 cri.go:89] found id: ""
	I0612 21:39:52.615957   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.615970   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:52.615978   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:52.616038   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:52.657316   80762 cri.go:89] found id: ""
	I0612 21:39:52.657348   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.657356   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:52.657362   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:52.657417   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:52.692426   80762 cri.go:89] found id: ""
	I0612 21:39:52.692459   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.692470   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:52.692478   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:52.692542   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:52.726800   80762 cri.go:89] found id: ""
	I0612 21:39:52.726835   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.726848   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:52.726856   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:52.726921   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:52.764283   80762 cri.go:89] found id: ""
	I0612 21:39:52.764314   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.764326   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:52.764341   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:52.764395   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:52.802279   80762 cri.go:89] found id: ""
	I0612 21:39:52.802311   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.802324   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:52.802331   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:52.802385   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:52.841433   80762 cri.go:89] found id: ""
	I0612 21:39:52.841466   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.841477   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:52.841484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:52.841546   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:50.667322   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:53.165294   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:51.016137   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:53.019373   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:52.818876   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:55.316845   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:52.881417   80762 cri.go:89] found id: ""
	I0612 21:39:52.881441   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.881449   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:52.881457   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:52.881468   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:52.936228   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:52.936262   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:52.950688   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:52.950718   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:53.025101   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:53.025122   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:53.025138   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:53.114986   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:53.115031   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:55.653893   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:55.668983   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:55.669047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:55.708445   80762 cri.go:89] found id: ""
	I0612 21:39:55.708475   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.708486   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:55.708494   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:55.708558   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:55.745158   80762 cri.go:89] found id: ""
	I0612 21:39:55.745185   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.745195   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:55.745204   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:55.745270   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:55.785322   80762 cri.go:89] found id: ""
	I0612 21:39:55.785344   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.785363   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:55.785370   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:55.785442   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:55.822371   80762 cri.go:89] found id: ""
	I0612 21:39:55.822397   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.822408   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:55.822416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:55.822484   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:55.856866   80762 cri.go:89] found id: ""
	I0612 21:39:55.856888   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.856895   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:55.856900   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:55.856954   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:55.891618   80762 cri.go:89] found id: ""
	I0612 21:39:55.891648   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.891660   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:55.891668   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:55.891731   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:55.927483   80762 cri.go:89] found id: ""
	I0612 21:39:55.927504   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.927513   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:55.927519   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:55.927572   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:55.963546   80762 cri.go:89] found id: ""
	I0612 21:39:55.963572   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.963584   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:55.963597   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:55.963616   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:56.037421   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:56.037442   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:56.037453   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:56.112148   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:56.112185   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:56.163359   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:56.163389   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:56.217109   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:56.217144   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:55.166499   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:57.665517   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:59.665625   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:55.513267   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:58.015558   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:57.317149   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:59.320306   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:01.815855   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:58.733278   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:58.746890   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:58.746951   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:58.785222   80762 cri.go:89] found id: ""
	I0612 21:39:58.785252   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.785263   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:58.785269   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:58.785343   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:58.824421   80762 cri.go:89] found id: ""
	I0612 21:39:58.824448   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.824455   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:58.824461   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:58.824521   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:58.863626   80762 cri.go:89] found id: ""
	I0612 21:39:58.863658   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.863669   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:58.863728   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:58.863818   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:58.904040   80762 cri.go:89] found id: ""
	I0612 21:39:58.904064   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.904073   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:58.904080   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:58.904147   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:58.937508   80762 cri.go:89] found id: ""
	I0612 21:39:58.937543   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.937557   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:58.937565   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:58.937632   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:58.974283   80762 cri.go:89] found id: ""
	I0612 21:39:58.974311   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.974322   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:58.974330   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:58.974383   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:59.009954   80762 cri.go:89] found id: ""
	I0612 21:39:59.009987   80762 logs.go:276] 0 containers: []
	W0612 21:39:59.009999   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:59.010007   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:59.010072   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:59.051911   80762 cri.go:89] found id: ""
	I0612 21:39:59.051935   80762 logs.go:276] 0 containers: []
	W0612 21:39:59.051943   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:59.051951   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:59.051961   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:59.102911   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:59.102942   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:59.116576   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:59.116608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:59.189590   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:59.189619   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:59.189634   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:59.270192   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:59.270232   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:01.820872   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:01.834916   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:01.835000   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:01.870526   80762 cri.go:89] found id: ""
	I0612 21:40:01.870560   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.870572   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:01.870579   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:01.870642   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:01.909581   80762 cri.go:89] found id: ""
	I0612 21:40:01.909614   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.909626   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:01.909633   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:01.909727   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:01.947944   80762 cri.go:89] found id: ""
	I0612 21:40:01.947976   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.947988   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:01.947995   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:01.948059   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:01.985745   80762 cri.go:89] found id: ""
	I0612 21:40:01.985781   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.985793   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:01.985800   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:01.985860   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:02.023716   80762 cri.go:89] found id: ""
	I0612 21:40:02.023741   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.023749   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:02.023754   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:02.023801   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:02.059136   80762 cri.go:89] found id: ""
	I0612 21:40:02.059168   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.059203   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:02.059212   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:02.059283   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:02.104520   80762 cri.go:89] found id: ""
	I0612 21:40:02.104544   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.104552   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:02.104558   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:02.104618   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:02.146130   80762 cri.go:89] found id: ""
	I0612 21:40:02.146164   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.146176   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:02.146187   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:02.146202   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:02.199672   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:02.199710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:02.215224   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:02.215256   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:02.290030   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:02.290057   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:02.290072   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:02.374579   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:02.374615   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:01.667390   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:04.165253   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:00.512229   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:02.513298   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:05.018848   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:03.816610   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:05.818990   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:04.915345   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:04.928323   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:04.928404   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:04.963267   80762 cri.go:89] found id: ""
	I0612 21:40:04.963297   80762 logs.go:276] 0 containers: []
	W0612 21:40:04.963310   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:04.963319   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:04.963386   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:04.998378   80762 cri.go:89] found id: ""
	I0612 21:40:04.998409   80762 logs.go:276] 0 containers: []
	W0612 21:40:04.998420   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:04.998426   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:04.998498   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:05.038094   80762 cri.go:89] found id: ""
	I0612 21:40:05.038118   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.038126   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:05.038132   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:05.038181   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:05.074331   80762 cri.go:89] found id: ""
	I0612 21:40:05.074366   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.074379   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:05.074386   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:05.074462   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:05.109332   80762 cri.go:89] found id: ""
	I0612 21:40:05.109359   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.109368   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:05.109373   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:05.109423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:05.143875   80762 cri.go:89] found id: ""
	I0612 21:40:05.143908   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.143918   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:05.143926   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:05.143990   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:05.183695   80762 cri.go:89] found id: ""
	I0612 21:40:05.183724   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.183731   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:05.183737   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:05.183792   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:05.222852   80762 cri.go:89] found id: ""
	I0612 21:40:05.222878   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.222887   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:05.222895   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:05.222907   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:05.262661   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:05.262687   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:05.315563   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:05.315593   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:05.332128   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:05.332163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:05.411675   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:05.411699   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:05.411712   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:06.665324   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:08.667163   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:07.512587   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:10.012843   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:08.316990   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:10.816093   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:07.991930   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:08.005743   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:08.005807   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:08.041685   80762 cri.go:89] found id: ""
	I0612 21:40:08.041714   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.041724   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:08.041732   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:08.041791   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:08.080875   80762 cri.go:89] found id: ""
	I0612 21:40:08.080905   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.080916   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:08.080925   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:08.080993   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:08.117290   80762 cri.go:89] found id: ""
	I0612 21:40:08.117316   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.117323   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:08.117329   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:08.117387   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:08.154345   80762 cri.go:89] found id: ""
	I0612 21:40:08.154376   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.154387   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:08.154395   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:08.154459   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:08.192913   80762 cri.go:89] found id: ""
	I0612 21:40:08.192947   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.192957   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:08.192969   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:08.193033   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:08.235732   80762 cri.go:89] found id: ""
	I0612 21:40:08.235764   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.235775   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:08.235782   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:08.235853   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:08.274282   80762 cri.go:89] found id: ""
	I0612 21:40:08.274306   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.274314   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:08.274320   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:08.274366   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:08.314585   80762 cri.go:89] found id: ""
	I0612 21:40:08.314608   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.314619   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:08.314628   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:08.314641   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:08.331693   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:08.331725   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:08.414541   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:08.414565   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:08.414584   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:08.496428   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:08.496460   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:08.546991   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:08.547020   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:11.099778   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:11.113450   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:11.113539   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:11.150426   80762 cri.go:89] found id: ""
	I0612 21:40:11.150451   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.150459   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:11.150464   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:11.150524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:11.189931   80762 cri.go:89] found id: ""
	I0612 21:40:11.189958   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.189967   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:11.189972   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:11.190031   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:11.228116   80762 cri.go:89] found id: ""
	I0612 21:40:11.228144   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.228154   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:11.228161   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:11.228243   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:11.268639   80762 cri.go:89] found id: ""
	I0612 21:40:11.268664   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.268672   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:11.268678   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:11.268723   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:11.306077   80762 cri.go:89] found id: ""
	I0612 21:40:11.306105   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.306116   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:11.306123   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:11.306187   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:11.344360   80762 cri.go:89] found id: ""
	I0612 21:40:11.344388   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.344399   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:11.344418   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:11.344475   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:11.382906   80762 cri.go:89] found id: ""
	I0612 21:40:11.382937   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.382948   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:11.382957   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:11.383027   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:11.418388   80762 cri.go:89] found id: ""
	I0612 21:40:11.418419   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.418429   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:11.418439   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:11.418453   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:11.432204   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:11.432241   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:11.508219   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:11.508251   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:11.508263   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:11.593021   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:11.593058   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:11.634056   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:11.634087   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:11.165384   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:13.170153   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:12.013303   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:14.013454   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:12.817129   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:15.316929   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:14.187831   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:14.203153   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:14.203248   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:14.239693   80762 cri.go:89] found id: ""
	I0612 21:40:14.239716   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.239723   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:14.239729   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:14.239827   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:14.273206   80762 cri.go:89] found id: ""
	I0612 21:40:14.273234   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.273244   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:14.273251   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:14.273313   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:14.315512   80762 cri.go:89] found id: ""
	I0612 21:40:14.315592   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.315610   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:14.315618   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:14.315679   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:14.352454   80762 cri.go:89] found id: ""
	I0612 21:40:14.352483   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.352496   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:14.352504   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:14.352554   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:14.387845   80762 cri.go:89] found id: ""
	I0612 21:40:14.387872   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.387880   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:14.387886   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:14.387935   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:14.423220   80762 cri.go:89] found id: ""
	I0612 21:40:14.423245   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.423254   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:14.423259   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:14.423322   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:14.457744   80762 cri.go:89] found id: ""
	I0612 21:40:14.457772   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.457784   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:14.457791   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:14.457849   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:14.493580   80762 cri.go:89] found id: ""
	I0612 21:40:14.493611   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.493622   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:14.493633   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:14.493669   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:14.566867   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:14.566894   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:14.566913   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:14.645916   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:14.645959   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:14.690232   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:14.690262   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:14.741532   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:14.741576   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:17.257886   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:17.271841   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:17.271910   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:17.309628   80762 cri.go:89] found id: ""
	I0612 21:40:17.309654   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.309667   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:17.309675   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:17.309746   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:17.346671   80762 cri.go:89] found id: ""
	I0612 21:40:17.346752   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.346769   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:17.346777   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:17.346842   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:17.381145   80762 cri.go:89] found id: ""
	I0612 21:40:17.381169   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.381177   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:17.381184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:17.381241   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:17.417159   80762 cri.go:89] found id: ""
	I0612 21:40:17.417179   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.417187   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:17.417194   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:17.417254   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:17.453189   80762 cri.go:89] found id: ""
	I0612 21:40:17.453213   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.453220   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:17.453226   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:17.453284   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:17.510988   80762 cri.go:89] found id: ""
	I0612 21:40:17.511012   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.511019   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:17.511026   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:17.511083   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:17.548141   80762 cri.go:89] found id: ""
	I0612 21:40:17.548166   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.548176   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:17.548182   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:17.548243   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:17.584591   80762 cri.go:89] found id: ""
	I0612 21:40:17.584619   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.584627   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:17.584637   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:17.584647   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:17.628627   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:17.628662   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:17.682792   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:17.682823   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:17.697921   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:17.697959   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:17.770591   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:17.770617   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:17.770633   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:15.665831   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:18.165059   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:16.014130   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:18.513491   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:17.817443   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.316576   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.350181   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:20.363671   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:20.363743   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:20.399858   80762 cri.go:89] found id: ""
	I0612 21:40:20.399889   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.399896   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:20.399903   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:20.399963   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:20.437715   80762 cri.go:89] found id: ""
	I0612 21:40:20.437755   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.437766   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:20.437776   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:20.437843   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:20.472525   80762 cri.go:89] found id: ""
	I0612 21:40:20.472558   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.472573   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:20.472582   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:20.472642   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:20.507923   80762 cri.go:89] found id: ""
	I0612 21:40:20.507948   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.507959   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:20.507966   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:20.508029   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:20.545471   80762 cri.go:89] found id: ""
	I0612 21:40:20.545502   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.545512   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:20.545519   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:20.545586   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:20.583793   80762 cri.go:89] found id: ""
	I0612 21:40:20.583829   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.583839   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:20.583846   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:20.583912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:20.624399   80762 cri.go:89] found id: ""
	I0612 21:40:20.624438   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.624449   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:20.624467   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:20.624530   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:20.665158   80762 cri.go:89] found id: ""
	I0612 21:40:20.665184   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.665194   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:20.665203   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:20.665217   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:20.743062   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:20.743101   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:20.792573   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:20.792613   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:20.847998   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:20.848033   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:20.863447   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:20.863497   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:20.938020   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:20.165455   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:22.665110   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:24.665262   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.513556   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:23.014750   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:22.316950   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:24.815377   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:26.817066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:23.438289   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:23.453792   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:23.453855   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:23.494044   80762 cri.go:89] found id: ""
	I0612 21:40:23.494070   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.494077   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:23.494083   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:23.494144   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:23.533278   80762 cri.go:89] found id: ""
	I0612 21:40:23.533305   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.533313   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:23.533319   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:23.533380   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:23.568504   80762 cri.go:89] found id: ""
	I0612 21:40:23.568538   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.568549   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:23.568556   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:23.568619   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:23.610596   80762 cri.go:89] found id: ""
	I0612 21:40:23.610624   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.610633   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:23.610638   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:23.610690   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:23.651856   80762 cri.go:89] found id: ""
	I0612 21:40:23.651886   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.651896   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:23.651903   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:23.651978   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:23.690989   80762 cri.go:89] found id: ""
	I0612 21:40:23.691020   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.691030   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:23.691036   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:23.691089   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:23.730417   80762 cri.go:89] found id: ""
	I0612 21:40:23.730454   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.730467   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:23.730476   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:23.730538   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:23.773887   80762 cri.go:89] found id: ""
	I0612 21:40:23.773913   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.773921   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:23.773932   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:23.773947   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:23.825771   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:23.825805   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:23.840136   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:23.840163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:23.933645   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:23.933670   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:23.933686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:24.020205   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:24.020243   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
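For reference, the block above is one complete diagnostic pass of the retry loop that repeats through this window while the control plane is down. It can be approximated by hand with the same commands the runner issues over SSH; this sketch is assembled only from the commands visible in the log, and the component names are the ones minikube probes for.

	# Probe for a running apiserver process, then ask the CRI runtime (via crictl) for each control-plane container.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== ${name} =="
	  sudo crictl ps -a --quiet --name="${name}"   # empty output corresponds to the 'found id: ""' lines above
	done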
	I0612 21:40:26.566746   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:26.579557   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:26.579612   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:26.614721   80762 cri.go:89] found id: ""
	I0612 21:40:26.614749   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.614757   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:26.614763   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:26.614815   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:26.651398   80762 cri.go:89] found id: ""
	I0612 21:40:26.651427   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.651437   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:26.651445   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:26.651506   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:26.688217   80762 cri.go:89] found id: ""
	I0612 21:40:26.688249   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.688261   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:26.688268   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:26.688333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:26.721316   80762 cri.go:89] found id: ""
	I0612 21:40:26.721346   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.721357   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:26.721364   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:26.721424   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:26.758842   80762 cri.go:89] found id: ""
	I0612 21:40:26.758868   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.758878   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:26.758885   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:26.758957   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:26.795696   80762 cri.go:89] found id: ""
	I0612 21:40:26.795725   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.795733   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:26.795738   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:26.795788   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:26.834903   80762 cri.go:89] found id: ""
	I0612 21:40:26.834932   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.834941   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:26.834947   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:26.835020   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:26.872751   80762 cri.go:89] found id: ""
	I0612 21:40:26.872788   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.872796   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:26.872805   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:26.872817   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:26.952401   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:26.952440   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:26.990548   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:26.990583   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:27.042973   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:27.043029   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:27.058348   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:27.058379   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:27.133047   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:26.666430   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.165063   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:25.513982   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:28.012556   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:30.017664   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.315668   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:31.316817   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.634105   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:29.654113   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:29.654171   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:29.700138   80762 cri.go:89] found id: ""
	I0612 21:40:29.700169   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.700179   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:29.700188   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:29.700260   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:29.751599   80762 cri.go:89] found id: ""
	I0612 21:40:29.751628   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.751638   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:29.751646   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:29.751699   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:29.801971   80762 cri.go:89] found id: ""
	I0612 21:40:29.801995   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.802003   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:29.802008   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:29.802059   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:29.839381   80762 cri.go:89] found id: ""
	I0612 21:40:29.839407   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.839418   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:29.839426   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:29.839484   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:29.876634   80762 cri.go:89] found id: ""
	I0612 21:40:29.876661   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.876668   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:29.876675   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:29.876721   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:29.909673   80762 cri.go:89] found id: ""
	I0612 21:40:29.909707   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.909718   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:29.909726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:29.909791   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:29.947984   80762 cri.go:89] found id: ""
	I0612 21:40:29.948019   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.948029   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:29.948037   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:29.948099   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:29.988611   80762 cri.go:89] found id: ""
	I0612 21:40:29.988639   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.988650   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:29.988660   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:29.988675   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:30.073180   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:30.073216   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:30.114703   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:30.114732   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:30.173242   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:30.173278   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:30.189081   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:30.189112   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:30.263564   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:32.763967   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:32.776738   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:32.776808   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:32.813088   80762 cri.go:89] found id: ""
	I0612 21:40:32.813115   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.813125   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:32.813132   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:32.813195   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:32.850960   80762 cri.go:89] found id: ""
	I0612 21:40:32.850987   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.850996   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:32.851004   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:32.851065   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:31.166578   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:33.669302   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:32.512480   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:34.512817   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:33.815867   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:35.817105   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:32.887229   80762 cri.go:89] found id: ""
	I0612 21:40:32.887259   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.887270   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:32.887277   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:32.887346   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:32.923123   80762 cri.go:89] found id: ""
	I0612 21:40:32.923148   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.923158   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:32.923164   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:32.923242   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:32.962603   80762 cri.go:89] found id: ""
	I0612 21:40:32.962628   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.962638   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:32.962644   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:32.962695   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:32.998971   80762 cri.go:89] found id: ""
	I0612 21:40:32.999025   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.999037   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:32.999046   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:32.999120   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:33.037640   80762 cri.go:89] found id: ""
	I0612 21:40:33.037670   80762 logs.go:276] 0 containers: []
	W0612 21:40:33.037680   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:33.037686   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:33.037748   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:33.073758   80762 cri.go:89] found id: ""
	I0612 21:40:33.073787   80762 logs.go:276] 0 containers: []
	W0612 21:40:33.073794   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:33.073804   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:33.073815   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:33.124478   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:33.124512   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:33.139010   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:33.139036   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:33.207693   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:33.207716   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:33.207732   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:33.287710   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:33.287746   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:35.831654   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:35.845783   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:35.845845   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:35.882097   80762 cri.go:89] found id: ""
	I0612 21:40:35.882129   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.882141   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:35.882149   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:35.882205   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:35.920931   80762 cri.go:89] found id: ""
	I0612 21:40:35.920972   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.920980   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:35.920985   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:35.921061   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:35.958689   80762 cri.go:89] found id: ""
	I0612 21:40:35.958712   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.958721   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:35.958726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:35.958774   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:35.994973   80762 cri.go:89] found id: ""
	I0612 21:40:35.995028   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.995040   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:35.995048   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:35.995114   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:36.035679   80762 cri.go:89] found id: ""
	I0612 21:40:36.035707   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.035715   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:36.035721   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:36.035768   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:36.071498   80762 cri.go:89] found id: ""
	I0612 21:40:36.071525   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.071534   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:36.071544   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:36.071594   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:36.107367   80762 cri.go:89] found id: ""
	I0612 21:40:36.107397   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.107406   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:36.107413   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:36.107466   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:36.148668   80762 cri.go:89] found id: ""
	I0612 21:40:36.148699   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.148710   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:36.148721   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:36.148736   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:36.207719   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:36.207765   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:36.223129   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:36.223158   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:36.290786   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:36.290809   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:36.290822   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:36.375361   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:36.375398   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
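The "Gathering logs for ..." steps in each pass run the commands below. Because the apiserver is refusing connections on localhost:8443, only the host-level sources (kubelet, dmesg, CRI-O, container status) return anything, and the `kubectl describe nodes` call fails exactly as shown in the stderr blocks. The paths and unit names are copied from the log; grouping them into one listing is only illustrative.

	# Host-level log sources gathered on every pass
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	# This step needs a reachable apiserver and is what produces the "connection refused" error above
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig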
	I0612 21:40:36.165430   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.165989   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:37.015936   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:39.513497   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.318886   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:40.815802   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.921100   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:38.935420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:38.935491   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:38.970519   80762 cri.go:89] found id: ""
	I0612 21:40:38.970548   80762 logs.go:276] 0 containers: []
	W0612 21:40:38.970559   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:38.970567   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:38.970639   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:39.005866   80762 cri.go:89] found id: ""
	I0612 21:40:39.005888   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.005896   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:39.005902   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:39.005954   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:39.043619   80762 cri.go:89] found id: ""
	I0612 21:40:39.043647   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.043655   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:39.043661   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:39.043709   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:39.081311   80762 cri.go:89] found id: ""
	I0612 21:40:39.081336   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.081344   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:39.081350   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:39.081410   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:39.117326   80762 cri.go:89] found id: ""
	I0612 21:40:39.117358   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.117367   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:39.117372   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:39.117423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:39.151785   80762 cri.go:89] found id: ""
	I0612 21:40:39.151819   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.151828   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:39.151835   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:39.151899   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:39.187031   80762 cri.go:89] found id: ""
	I0612 21:40:39.187057   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.187065   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:39.187071   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:39.187119   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:39.222186   80762 cri.go:89] found id: ""
	I0612 21:40:39.222212   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.222223   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:39.222233   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:39.222245   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:39.276126   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:39.276164   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:39.291631   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:39.291658   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:39.365615   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:39.365641   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:39.365659   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:39.442548   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:39.442600   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:41.980840   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:41.996629   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:41.996686   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:42.034158   80762 cri.go:89] found id: ""
	I0612 21:40:42.034186   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.034195   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:42.034202   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:42.034274   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:42.070981   80762 cri.go:89] found id: ""
	I0612 21:40:42.071011   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.071021   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:42.071028   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:42.071093   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:42.108282   80762 cri.go:89] found id: ""
	I0612 21:40:42.108309   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.108316   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:42.108322   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:42.108369   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:42.146394   80762 cri.go:89] found id: ""
	I0612 21:40:42.146423   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.146434   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:42.146454   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:42.146539   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:42.183577   80762 cri.go:89] found id: ""
	I0612 21:40:42.183601   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.183608   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:42.183614   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:42.183662   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:42.222069   80762 cri.go:89] found id: ""
	I0612 21:40:42.222100   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.222109   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:42.222115   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:42.222168   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:42.259128   80762 cri.go:89] found id: ""
	I0612 21:40:42.259155   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.259164   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:42.259192   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:42.259282   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:42.296321   80762 cri.go:89] found id: ""
	I0612 21:40:42.296354   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.296368   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:42.296380   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:42.296400   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:42.311098   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:42.311137   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:42.386116   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:42.386144   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:42.386163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:42.467016   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:42.467054   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:42.509143   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:42.509180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:40.166288   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.664817   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:44.665596   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.017043   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:44.513368   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.816702   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:45.316890   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:45.062872   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:45.076570   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:45.076658   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:45.114362   80762 cri.go:89] found id: ""
	I0612 21:40:45.114394   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.114404   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:45.114412   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:45.114478   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:45.151577   80762 cri.go:89] found id: ""
	I0612 21:40:45.151609   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.151620   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:45.151627   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:45.151689   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:45.188753   80762 cri.go:89] found id: ""
	I0612 21:40:45.188785   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.188795   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:45.188802   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:45.188861   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:45.224775   80762 cri.go:89] found id: ""
	I0612 21:40:45.224801   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.224808   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:45.224814   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:45.224873   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:45.260440   80762 cri.go:89] found id: ""
	I0612 21:40:45.260472   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.260483   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:45.260490   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:45.260547   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:45.297662   80762 cri.go:89] found id: ""
	I0612 21:40:45.297697   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.297709   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:45.297716   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:45.297774   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:45.335637   80762 cri.go:89] found id: ""
	I0612 21:40:45.335669   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.335682   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:45.335690   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:45.335753   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:45.371523   80762 cri.go:89] found id: ""
	I0612 21:40:45.371580   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.371590   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:45.371599   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:45.371610   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:45.424029   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:45.424065   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:45.440339   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:45.440378   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:45.509504   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:45.509526   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:45.509541   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:45.591857   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:45.591893   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:47.166437   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.665544   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:47.016561   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.511894   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:47.320090   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.816816   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
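The interleaved pod_ready lines appear to come from three other concurrent test processes (PIDs 80404, 80243, 80157), each polling the Ready condition of its cluster's metrics-server pod. A roughly equivalent manual check, using one of the pod names taken from the log, would be the hypothetical one-liner below (not part of the test harness itself):

	kubectl -n kube-system get pod metrics-server-569cc877fc-bkhxn \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" while the pod is unready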
	I0612 21:40:48.135912   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:48.151271   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:48.151331   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:48.192740   80762 cri.go:89] found id: ""
	I0612 21:40:48.192775   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.192788   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:48.192798   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:48.192875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:48.230440   80762 cri.go:89] found id: ""
	I0612 21:40:48.230469   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.230479   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:48.230487   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:48.230549   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:48.270892   80762 cri.go:89] found id: ""
	I0612 21:40:48.270922   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.270933   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:48.270941   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:48.270996   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:48.308555   80762 cri.go:89] found id: ""
	I0612 21:40:48.308580   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.308588   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:48.308594   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:48.308640   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:48.342705   80762 cri.go:89] found id: ""
	I0612 21:40:48.342727   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.342735   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:48.342741   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:48.342788   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:48.377418   80762 cri.go:89] found id: ""
	I0612 21:40:48.377450   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.377461   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:48.377468   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:48.377535   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:48.413092   80762 cri.go:89] found id: ""
	I0612 21:40:48.413126   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.413141   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:48.413149   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:48.413215   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:48.447673   80762 cri.go:89] found id: ""
	I0612 21:40:48.447699   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.447708   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:48.447716   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:48.447728   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:48.488508   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:48.488542   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:48.540573   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:48.540608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:48.554735   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:48.554762   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:48.632074   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:48.632098   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:48.632117   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:51.212336   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:51.227428   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:51.227493   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:51.268124   80762 cri.go:89] found id: ""
	I0612 21:40:51.268157   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.268167   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:51.268172   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:51.268220   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:51.305751   80762 cri.go:89] found id: ""
	I0612 21:40:51.305777   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.305785   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:51.305793   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:51.305849   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:51.347292   80762 cri.go:89] found id: ""
	I0612 21:40:51.347318   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.347325   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:51.347332   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:51.347394   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:51.387476   80762 cri.go:89] found id: ""
	I0612 21:40:51.387501   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.387509   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:51.387515   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:51.387573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:51.431992   80762 cri.go:89] found id: ""
	I0612 21:40:51.432019   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.432029   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:51.432036   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:51.432096   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:51.477204   80762 cri.go:89] found id: ""
	I0612 21:40:51.477235   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.477246   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:51.477254   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:51.477346   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:51.518449   80762 cri.go:89] found id: ""
	I0612 21:40:51.518477   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.518488   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:51.518502   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:51.518562   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:51.554991   80762 cri.go:89] found id: ""
	I0612 21:40:51.555015   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.555024   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:51.555033   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:51.555046   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:51.606732   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:51.606769   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:51.620512   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:51.620538   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:51.697029   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:51.697058   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:51.697074   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:51.775401   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:51.775437   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:51.666561   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.166247   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:51.512909   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.012887   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:52.315904   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.316764   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:56.816819   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.318059   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:54.331420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:54.331509   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:54.367886   80762 cri.go:89] found id: ""
	I0612 21:40:54.367926   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.367948   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:54.367959   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:54.368047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:54.403998   80762 cri.go:89] found id: ""
	I0612 21:40:54.404023   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.404034   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:54.404041   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:54.404108   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:54.441449   80762 cri.go:89] found id: ""
	I0612 21:40:54.441480   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.441491   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:54.441498   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:54.441557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:54.476459   80762 cri.go:89] found id: ""
	I0612 21:40:54.476490   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.476500   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:54.476508   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:54.476573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:54.515337   80762 cri.go:89] found id: ""
	I0612 21:40:54.515360   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.515368   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:54.515374   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:54.515423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:54.551447   80762 cri.go:89] found id: ""
	I0612 21:40:54.551468   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.551475   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:54.551481   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:54.551528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:54.587082   80762 cri.go:89] found id: ""
	I0612 21:40:54.587114   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.587125   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:54.587145   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:54.587225   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:54.624211   80762 cri.go:89] found id: ""
	I0612 21:40:54.624235   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.624257   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:54.624268   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:54.624282   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:54.677816   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:54.677848   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:54.693725   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:54.693749   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:54.772229   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:54.772255   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:54.772273   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:54.852543   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:54.852578   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:57.397722   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:57.411082   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:57.411145   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:57.449633   80762 cri.go:89] found id: ""
	I0612 21:40:57.449662   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.449673   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:57.449680   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:57.449745   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:57.489855   80762 cri.go:89] found id: ""
	I0612 21:40:57.489880   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.489889   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:57.489894   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:57.489952   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:57.528986   80762 cri.go:89] found id: ""
	I0612 21:40:57.529006   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.529014   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:57.529019   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:57.529081   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:57.566701   80762 cri.go:89] found id: ""
	I0612 21:40:57.566730   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.566739   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:57.566746   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:57.566800   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:57.601114   80762 cri.go:89] found id: ""
	I0612 21:40:57.601137   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.601145   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:57.601151   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:57.601212   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:57.636120   80762 cri.go:89] found id: ""
	I0612 21:40:57.636145   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.636155   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:57.636163   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:57.636225   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:57.676912   80762 cri.go:89] found id: ""
	I0612 21:40:57.676953   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.676960   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:57.676966   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:57.677039   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:57.714671   80762 cri.go:89] found id: ""
	I0612 21:40:57.714691   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.714699   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:57.714707   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:57.714720   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:57.770550   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:57.770583   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:57.785062   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:57.785093   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:57.853448   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:57.853468   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:57.853480   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:56.167768   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.665108   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:56.014274   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.014535   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.816961   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:00.817450   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:57.939957   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:57.939999   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:00.493469   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:00.509746   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:00.509819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:00.546582   80762 cri.go:89] found id: ""
	I0612 21:41:00.546610   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.546620   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:00.546629   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:00.546683   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:00.584229   80762 cri.go:89] found id: ""
	I0612 21:41:00.584256   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.584264   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:00.584269   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:00.584337   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:00.618679   80762 cri.go:89] found id: ""
	I0612 21:41:00.618704   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.618712   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:00.618719   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:00.618778   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:00.656336   80762 cri.go:89] found id: ""
	I0612 21:41:00.656364   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.656375   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:00.656384   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:00.656457   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:00.694147   80762 cri.go:89] found id: ""
	I0612 21:41:00.694173   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.694182   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:00.694187   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:00.694236   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:00.733964   80762 cri.go:89] found id: ""
	I0612 21:41:00.733994   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.734006   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:00.734014   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:00.734076   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:00.771245   80762 cri.go:89] found id: ""
	I0612 21:41:00.771274   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.771287   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:00.771293   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:00.771357   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:00.809118   80762 cri.go:89] found id: ""
	I0612 21:41:00.809150   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.809158   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:00.809168   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:00.809188   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:00.863479   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:00.863514   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:00.878749   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:00.878783   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:00.955800   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:00.955825   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:00.955844   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:01.037587   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:01.037618   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:00.666373   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.165203   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:00.513805   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.017922   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.317115   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:05.817907   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.583063   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:03.597656   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:03.597732   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:03.633233   80762 cri.go:89] found id: ""
	I0612 21:41:03.633263   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.633283   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:03.633291   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:03.633357   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:03.679900   80762 cri.go:89] found id: ""
	I0612 21:41:03.679930   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.679941   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:03.679948   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:03.680018   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:03.718766   80762 cri.go:89] found id: ""
	I0612 21:41:03.718792   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.718800   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:03.718811   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:03.718868   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:03.759404   80762 cri.go:89] found id: ""
	I0612 21:41:03.759429   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.759437   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:03.759443   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:03.759496   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:03.794313   80762 cri.go:89] found id: ""
	I0612 21:41:03.794348   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.794357   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:03.794364   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:03.794430   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:03.832525   80762 cri.go:89] found id: ""
	I0612 21:41:03.832546   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.832554   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:03.832559   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:03.832607   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:03.872841   80762 cri.go:89] found id: ""
	I0612 21:41:03.872868   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.872878   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:03.872885   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:03.872945   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:03.912736   80762 cri.go:89] found id: ""
	I0612 21:41:03.912760   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.912770   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:03.912781   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:03.912794   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:03.986645   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:03.986672   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:03.986688   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:04.066766   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:04.066799   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:04.108219   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:04.108250   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:04.168866   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:04.168911   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:06.684232   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:06.698359   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:06.698443   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:06.735324   80762 cri.go:89] found id: ""
	I0612 21:41:06.735350   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.735359   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:06.735364   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:06.735418   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:06.771763   80762 cri.go:89] found id: ""
	I0612 21:41:06.771786   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.771794   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:06.771799   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:06.771850   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:06.808151   80762 cri.go:89] found id: ""
	I0612 21:41:06.808179   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.808188   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:06.808193   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:06.808263   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:06.846099   80762 cri.go:89] found id: ""
	I0612 21:41:06.846121   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.846129   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:06.846134   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:06.846182   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:06.883559   80762 cri.go:89] found id: ""
	I0612 21:41:06.883584   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.883591   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:06.883597   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:06.883645   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:06.920799   80762 cri.go:89] found id: ""
	I0612 21:41:06.920830   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.920841   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:06.920849   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:06.920914   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:06.964441   80762 cri.go:89] found id: ""
	I0612 21:41:06.964472   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.964482   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:06.964490   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:06.964561   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:07.000866   80762 cri.go:89] found id: ""
	I0612 21:41:07.000901   80762 logs.go:276] 0 containers: []
	W0612 21:41:07.000912   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:07.000924   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:07.000993   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:07.017074   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:07.017121   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:07.093873   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:07.093901   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:07.093925   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:07.171258   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:07.171293   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:07.212588   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:07.212624   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:05.166177   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:07.665354   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.665558   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:05.512109   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:07.512615   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.513483   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:08.316327   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:10.316456   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.767332   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:09.781184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:09.781249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:09.818966   80762 cri.go:89] found id: ""
	I0612 21:41:09.818999   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.819008   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:09.819014   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:09.819064   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:09.854714   80762 cri.go:89] found id: ""
	I0612 21:41:09.854742   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.854760   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:09.854772   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:09.854823   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:09.891229   80762 cri.go:89] found id: ""
	I0612 21:41:09.891257   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.891268   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:09.891274   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:09.891335   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:09.928569   80762 cri.go:89] found id: ""
	I0612 21:41:09.928598   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.928610   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:09.928617   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:09.928680   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:09.963681   80762 cri.go:89] found id: ""
	I0612 21:41:09.963714   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.963725   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:09.963733   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:09.963819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:10.002340   80762 cri.go:89] found id: ""
	I0612 21:41:10.002368   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.002380   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:10.002388   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:10.002454   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:10.041935   80762 cri.go:89] found id: ""
	I0612 21:41:10.041961   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.041972   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:10.041979   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:10.042047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:10.080541   80762 cri.go:89] found id: ""
	I0612 21:41:10.080571   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.080584   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:10.080598   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:10.080614   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:10.140904   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:10.140944   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:10.176646   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:10.176688   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:10.272147   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:10.272169   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:10.272183   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:10.352649   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:10.352686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:12.166618   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.665896   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.013177   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.013716   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.317177   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.317391   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:16.815940   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.896274   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:12.911147   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:12.911231   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:12.947628   80762 cri.go:89] found id: ""
	I0612 21:41:12.947651   80762 logs.go:276] 0 containers: []
	W0612 21:41:12.947660   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:12.947665   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:12.947726   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:12.982813   80762 cri.go:89] found id: ""
	I0612 21:41:12.982837   80762 logs.go:276] 0 containers: []
	W0612 21:41:12.982845   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:12.982851   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:12.982898   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:13.021360   80762 cri.go:89] found id: ""
	I0612 21:41:13.021403   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.021412   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:13.021417   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:13.021468   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:13.063534   80762 cri.go:89] found id: ""
	I0612 21:41:13.063566   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.063576   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:13.063585   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:13.063666   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:13.098767   80762 cri.go:89] found id: ""
	I0612 21:41:13.098796   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.098807   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:13.098816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:13.098878   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:13.140764   80762 cri.go:89] found id: ""
	I0612 21:41:13.140797   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.140809   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:13.140816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:13.140882   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:13.180356   80762 cri.go:89] found id: ""
	I0612 21:41:13.180400   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.180413   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:13.180420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:13.180482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:13.215198   80762 cri.go:89] found id: ""
	I0612 21:41:13.215227   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.215238   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:13.215249   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:13.215265   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:13.273143   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:13.273182   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:13.287861   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:13.287893   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:13.366052   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:13.366073   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:13.366099   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:13.450980   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:13.451015   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:15.991386   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:16.005618   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:16.005699   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:16.047253   80762 cri.go:89] found id: ""
	I0612 21:41:16.047281   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.047289   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:16.047295   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:16.047356   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:16.082860   80762 cri.go:89] found id: ""
	I0612 21:41:16.082886   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.082894   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:16.082899   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:16.082948   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:16.123127   80762 cri.go:89] found id: ""
	I0612 21:41:16.123152   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.123164   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:16.123187   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:16.123247   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:16.167155   80762 cri.go:89] found id: ""
	I0612 21:41:16.167189   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.167199   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:16.167207   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:16.167276   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:16.204036   80762 cri.go:89] found id: ""
	I0612 21:41:16.204061   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.204071   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:16.204079   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:16.204140   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:16.246672   80762 cri.go:89] found id: ""
	I0612 21:41:16.246701   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.246712   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:16.246721   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:16.246785   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:16.286820   80762 cri.go:89] found id: ""
	I0612 21:41:16.286849   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.286857   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:16.286864   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:16.286919   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:16.326622   80762 cri.go:89] found id: ""
	I0612 21:41:16.326649   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.326660   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:16.326667   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:16.326678   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:16.407492   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:16.407525   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:16.448207   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:16.448236   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:16.501675   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:16.501714   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:16.518173   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:16.518202   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:16.592582   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:17.166163   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.167204   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:16.514405   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.016197   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:18.816596   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:20.817504   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.093054   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:19.107926   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:19.108002   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:19.149386   80762 cri.go:89] found id: ""
	I0612 21:41:19.149411   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.149421   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:19.149429   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:19.149493   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:19.188092   80762 cri.go:89] found id: ""
	I0612 21:41:19.188120   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.188131   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:19.188139   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:19.188201   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:19.227203   80762 cri.go:89] found id: ""
	I0612 21:41:19.227229   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.227235   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:19.227242   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:19.227301   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:19.269187   80762 cri.go:89] found id: ""
	I0612 21:41:19.269217   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.269225   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:19.269232   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:19.269294   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:19.305394   80762 cri.go:89] found id: ""
	I0612 21:41:19.305418   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.305425   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:19.305431   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:19.305480   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:19.347794   80762 cri.go:89] found id: ""
	I0612 21:41:19.347825   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.347837   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:19.347846   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:19.347907   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:19.384314   80762 cri.go:89] found id: ""
	I0612 21:41:19.384346   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.384364   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:19.384372   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:19.384428   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:19.421782   80762 cri.go:89] found id: ""
	I0612 21:41:19.421811   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.421822   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:19.421834   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:19.421849   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:19.475969   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:19.476000   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:19.490683   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:19.490710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:19.574492   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:19.574513   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:19.574525   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:19.662288   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:19.662324   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:22.205404   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:22.220217   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:22.220297   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:22.256870   80762 cri.go:89] found id: ""
	I0612 21:41:22.256901   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.256913   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:22.256921   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:22.256984   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:22.290380   80762 cri.go:89] found id: ""
	I0612 21:41:22.290413   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.290425   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:22.290433   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:22.290497   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:22.324981   80762 cri.go:89] found id: ""
	I0612 21:41:22.325010   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.325019   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:22.325024   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:22.325093   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:22.362900   80762 cri.go:89] found id: ""
	I0612 21:41:22.362926   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.362938   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:22.362946   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:22.363008   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:22.399004   80762 cri.go:89] found id: ""
	I0612 21:41:22.399037   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.399048   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:22.399056   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:22.399116   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:22.434306   80762 cri.go:89] found id: ""
	I0612 21:41:22.434341   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.434355   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:22.434365   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:22.434422   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:22.479085   80762 cri.go:89] found id: ""
	I0612 21:41:22.479116   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.479129   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:22.479142   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:22.479228   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:22.516730   80762 cri.go:89] found id: ""
	I0612 21:41:22.516755   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.516761   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:22.516769   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:22.516780   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:22.570921   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:22.570957   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:22.585409   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:22.585437   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:22.667314   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:22.667342   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:22.667360   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:22.743865   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:22.743901   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:21.170060   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.666364   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:21.021658   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.512541   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.316232   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.816641   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.282306   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:25.297334   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:25.297407   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:25.336610   80762 cri.go:89] found id: ""
	I0612 21:41:25.336641   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.336654   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:25.336662   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:25.336729   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:25.373307   80762 cri.go:89] found id: ""
	I0612 21:41:25.373338   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.373350   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:25.373358   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:25.373425   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:25.413141   80762 cri.go:89] found id: ""
	I0612 21:41:25.413169   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.413177   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:25.413183   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:25.413233   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:25.450810   80762 cri.go:89] found id: ""
	I0612 21:41:25.450842   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.450853   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:25.450862   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:25.450924   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:25.487017   80762 cri.go:89] found id: ""
	I0612 21:41:25.487049   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.487255   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:25.487269   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:25.487328   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:25.524335   80762 cri.go:89] found id: ""
	I0612 21:41:25.524361   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.524371   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:25.524377   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:25.524428   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:25.560394   80762 cri.go:89] found id: ""
	I0612 21:41:25.560421   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.560429   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:25.560435   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:25.560482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:25.599334   80762 cri.go:89] found id: ""
	I0612 21:41:25.599362   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.599372   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:25.599384   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:25.599399   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:25.680153   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:25.680195   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:25.726336   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:25.726377   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:25.777064   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:25.777098   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:25.791978   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:25.792007   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:25.868860   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
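	(The lines above are one full diagnostic pass by the minikube log gatherer, PID 80762: it checks for a running kube-apiserver process, lists each expected control-plane container with crictl, then collects kubelet, dmesg, CRI-O and container-status output; "describe nodes" fails because nothing is serving on localhost:8443. The same pattern repeats throughout the rest of this log. A minimal sketch of that probe, runnable inside the guest, e.g. via "minikube ssh", follows; the commands are copied from the ssh_runner lines above, and the kubectl binary path assumes the v1.20.0 layout shown in this log.)

	  #!/usr/bin/env bash
	  # Sketch only: reproduces the probe cycle logged above inside the minikube guest.
	  # Paths and unit names are taken from this report; adjust if your image differs.

	  # 1. Is a kube-apiserver process running for this profile?
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

	  # 2. List CRI containers for each control-plane component (empty output = none found).
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    echo "$name: ${ids:-<none>}"
	  done

	  # 3. Gather the same logs minikube collects for diagnosis.
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo journalctl -u crio -n 400
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig   # refused while the apiserver is down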
	I0612 21:41:25.667028   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.164920   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.514249   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.012042   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:30.013658   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.316543   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:30.816789   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.369099   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:28.382729   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:28.382786   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:28.423835   80762 cri.go:89] found id: ""
	I0612 21:41:28.423865   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.423875   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:28.423889   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:28.423953   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:28.463098   80762 cri.go:89] found id: ""
	I0612 21:41:28.463127   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.463137   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:28.463144   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:28.463223   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:28.499678   80762 cri.go:89] found id: ""
	I0612 21:41:28.499707   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.499718   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:28.499726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:28.499786   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:28.536057   80762 cri.go:89] found id: ""
	I0612 21:41:28.536089   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.536101   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:28.536108   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:28.536180   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:28.571052   80762 cri.go:89] found id: ""
	I0612 21:41:28.571080   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.571090   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:28.571098   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:28.571162   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:28.607320   80762 cri.go:89] found id: ""
	I0612 21:41:28.607348   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.607360   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:28.607368   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:28.607427   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:28.643000   80762 cri.go:89] found id: ""
	I0612 21:41:28.643029   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.643037   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:28.643042   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:28.643113   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:28.684134   80762 cri.go:89] found id: ""
	I0612 21:41:28.684164   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.684175   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:28.684186   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:28.684201   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:28.737059   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:28.737092   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:28.753290   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:28.753320   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:28.826964   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:28.826990   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:28.827009   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:28.908874   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:28.908919   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:31.450362   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:31.465831   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:31.465912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:31.507441   80762 cri.go:89] found id: ""
	I0612 21:41:31.507465   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.507474   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:31.507482   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:31.507546   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:31.541403   80762 cri.go:89] found id: ""
	I0612 21:41:31.541437   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.541450   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:31.541458   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:31.541524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:31.576367   80762 cri.go:89] found id: ""
	I0612 21:41:31.576393   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.576405   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:31.576412   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:31.576475   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:31.615053   80762 cri.go:89] found id: ""
	I0612 21:41:31.615081   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.615091   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:31.615099   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:31.615159   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:31.650460   80762 cri.go:89] found id: ""
	I0612 21:41:31.650495   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.650504   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:31.650511   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:31.650580   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:31.690764   80762 cri.go:89] found id: ""
	I0612 21:41:31.690792   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.690803   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:31.690810   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:31.690870   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:31.729785   80762 cri.go:89] found id: ""
	I0612 21:41:31.729809   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.729817   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:31.729822   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:31.729881   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:31.772978   80762 cri.go:89] found id: ""
	I0612 21:41:31.773005   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.773013   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:31.773023   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:31.773038   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:31.830451   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:31.830484   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:31.846821   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:31.846850   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:31.927289   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:31.927328   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:31.927358   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:32.004814   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:32.004852   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:30.165423   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:32.165695   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.664959   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:32.512866   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.515104   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:33.316674   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:35.816686   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.550931   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:34.567559   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:34.567618   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:34.602234   80762 cri.go:89] found id: ""
	I0612 21:41:34.602260   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.602267   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:34.602273   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:34.602318   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:34.639575   80762 cri.go:89] found id: ""
	I0612 21:41:34.639598   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.639605   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:34.639610   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:34.639656   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:34.681325   80762 cri.go:89] found id: ""
	I0612 21:41:34.681360   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.681368   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:34.681374   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:34.681478   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:34.721405   80762 cri.go:89] found id: ""
	I0612 21:41:34.721432   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.721444   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:34.721451   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:34.721517   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:34.764344   80762 cri.go:89] found id: ""
	I0612 21:41:34.764375   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.764386   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:34.764394   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:34.764459   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:34.802083   80762 cri.go:89] found id: ""
	I0612 21:41:34.802107   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.802115   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:34.802121   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:34.802181   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:34.843418   80762 cri.go:89] found id: ""
	I0612 21:41:34.843441   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.843450   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:34.843455   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:34.843501   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:34.877803   80762 cri.go:89] found id: ""
	I0612 21:41:34.877832   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.877842   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:34.877852   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:34.877867   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:34.930515   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:34.930545   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:34.943705   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:34.943729   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:35.024912   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:35.024931   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:35.024941   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:35.109129   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:35.109165   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:37.651788   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:37.667901   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:37.667964   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:37.709599   80762 cri.go:89] found id: ""
	I0612 21:41:37.709627   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.709637   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:37.709645   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:37.709700   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:37.747150   80762 cri.go:89] found id: ""
	I0612 21:41:37.747191   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.747204   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:37.747212   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:37.747273   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:37.785528   80762 cri.go:89] found id: ""
	I0612 21:41:37.785552   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.785560   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:37.785567   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:37.785614   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:37.822363   80762 cri.go:89] found id: ""
	I0612 21:41:37.822390   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.822400   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:37.822408   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:37.822468   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:36.666054   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:39.165222   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:37.012397   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:39.012503   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:38.317132   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:40.821114   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:37.858285   80762 cri.go:89] found id: ""
	I0612 21:41:37.858395   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.858409   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:37.858416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:37.858466   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:37.897500   80762 cri.go:89] found id: ""
	I0612 21:41:37.897542   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.897556   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:37.897574   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:37.897635   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:37.937878   80762 cri.go:89] found id: ""
	I0612 21:41:37.937905   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.937921   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:37.937927   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:37.937985   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:37.978282   80762 cri.go:89] found id: ""
	I0612 21:41:37.978310   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.978319   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:37.978327   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:37.978341   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:38.055864   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:38.055890   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:38.055903   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:38.135883   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:38.135918   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:38.178641   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:38.178668   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:38.236635   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:38.236686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:40.759426   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:40.773526   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:40.773598   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:40.819130   80762 cri.go:89] found id: ""
	I0612 21:41:40.819161   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.819190   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:40.819202   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:40.819264   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:40.856176   80762 cri.go:89] found id: ""
	I0612 21:41:40.856204   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.856216   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:40.856224   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:40.856287   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:40.896709   80762 cri.go:89] found id: ""
	I0612 21:41:40.896739   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.896750   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:40.896759   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:40.896820   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:40.936431   80762 cri.go:89] found id: ""
	I0612 21:41:40.936457   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.936465   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:40.936471   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:40.936528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:40.979773   80762 cri.go:89] found id: ""
	I0612 21:41:40.979809   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.979820   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:40.979828   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:40.979892   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:41.023885   80762 cri.go:89] found id: ""
	I0612 21:41:41.023910   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.023919   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:41.023925   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:41.024004   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:41.070370   80762 cri.go:89] found id: ""
	I0612 21:41:41.070396   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.070405   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:41.070411   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:41.070467   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:41.115282   80762 cri.go:89] found id: ""
	I0612 21:41:41.115311   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.115321   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:41.115332   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:41.115346   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:41.128680   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:41.128710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:41.206100   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:41.206125   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:41.206140   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:41.283499   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:41.283536   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:41.323275   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:41.323307   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:41.166258   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.666600   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:41.013379   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.512866   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.316659   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:45.816066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.875750   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:43.890156   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:43.890216   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:43.935105   80762 cri.go:89] found id: ""
	I0612 21:41:43.935135   80762 logs.go:276] 0 containers: []
	W0612 21:41:43.935147   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:43.935155   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:43.935236   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:43.980929   80762 cri.go:89] found id: ""
	I0612 21:41:43.980958   80762 logs.go:276] 0 containers: []
	W0612 21:41:43.980967   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:43.980973   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:43.981051   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:44.029387   80762 cri.go:89] found id: ""
	I0612 21:41:44.029409   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.029416   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:44.029421   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:44.029483   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:44.067415   80762 cri.go:89] found id: ""
	I0612 21:41:44.067449   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.067460   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:44.067468   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:44.067528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:44.105093   80762 cri.go:89] found id: ""
	I0612 21:41:44.105117   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.105125   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:44.105131   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:44.105178   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:44.142647   80762 cri.go:89] found id: ""
	I0612 21:41:44.142680   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.142691   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:44.142699   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:44.142759   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:44.182725   80762 cri.go:89] found id: ""
	I0612 21:41:44.182756   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.182767   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:44.182775   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:44.182836   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:44.219538   80762 cri.go:89] found id: ""
	I0612 21:41:44.219568   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.219580   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:44.219593   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:44.219608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:44.272234   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:44.272267   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:44.285631   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:44.285663   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:44.362453   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:44.362470   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:44.362482   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:44.444624   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:44.444656   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:46.985731   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:46.999749   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:46.999819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:47.035051   80762 cri.go:89] found id: ""
	I0612 21:41:47.035073   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.035080   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:47.035086   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:47.035136   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:47.077929   80762 cri.go:89] found id: ""
	I0612 21:41:47.077964   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.077975   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:47.077982   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:47.078062   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:47.111621   80762 cri.go:89] found id: ""
	I0612 21:41:47.111660   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.111671   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:47.111679   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:47.111744   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:47.150700   80762 cri.go:89] found id: ""
	I0612 21:41:47.150725   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.150733   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:47.150739   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:47.150787   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:47.189547   80762 cri.go:89] found id: ""
	I0612 21:41:47.189576   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.189586   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:47.189593   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:47.189660   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:47.229482   80762 cri.go:89] found id: ""
	I0612 21:41:47.229510   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.229522   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:47.229530   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:47.229599   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:47.266798   80762 cri.go:89] found id: ""
	I0612 21:41:47.266826   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.266837   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:47.266844   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:47.266906   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:47.302256   80762 cri.go:89] found id: ""
	I0612 21:41:47.302280   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.302287   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:47.302295   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:47.302306   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:47.354485   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:47.354526   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:47.368689   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:47.368713   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:47.438219   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:47.438244   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:47.438257   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:47.514199   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:47.514227   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:46.165541   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:48.664957   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:45.512922   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:47.513491   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.012630   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:47.817136   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.317083   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.056394   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:50.069348   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:50.069482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:50.106057   80762 cri.go:89] found id: ""
	I0612 21:41:50.106087   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.106097   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:50.106104   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:50.106162   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:50.144532   80762 cri.go:89] found id: ""
	I0612 21:41:50.144560   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.144571   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:50.144579   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:50.144662   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:50.184549   80762 cri.go:89] found id: ""
	I0612 21:41:50.184575   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.184583   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:50.184588   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:50.184648   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:50.228520   80762 cri.go:89] found id: ""
	I0612 21:41:50.228555   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.228569   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:50.228578   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:50.228649   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:50.265697   80762 cri.go:89] found id: ""
	I0612 21:41:50.265726   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.265737   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:50.265744   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:50.265818   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:50.301353   80762 cri.go:89] found id: ""
	I0612 21:41:50.301393   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.301410   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:50.301416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:50.301477   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:50.337273   80762 cri.go:89] found id: ""
	I0612 21:41:50.337298   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.337306   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:50.337313   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:50.337374   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:50.383090   80762 cri.go:89] found id: ""
	I0612 21:41:50.383116   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.383126   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:50.383138   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:50.383151   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:50.454193   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:50.454240   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:50.477753   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:50.477779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:50.544052   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:50.544075   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:50.544091   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:50.626441   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:50.626480   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:50.666068   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.666287   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.013142   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:54.512869   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.318942   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:54.816918   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:56.818011   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:53.168599   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:53.181682   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:53.181764   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:53.228060   80762 cri.go:89] found id: ""
	I0612 21:41:53.228096   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.228107   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:53.228115   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:53.228176   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:53.264867   80762 cri.go:89] found id: ""
	I0612 21:41:53.264890   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.264898   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:53.264903   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:53.264950   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:53.298351   80762 cri.go:89] found id: ""
	I0612 21:41:53.298378   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.298386   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:53.298392   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:53.298448   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:53.335888   80762 cri.go:89] found id: ""
	I0612 21:41:53.335910   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.335917   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:53.335922   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:53.335980   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:53.376131   80762 cri.go:89] found id: ""
	I0612 21:41:53.376166   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.376175   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:53.376183   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:53.376240   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:53.412059   80762 cri.go:89] found id: ""
	I0612 21:41:53.412082   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.412088   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:53.412097   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:53.412142   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:53.446776   80762 cri.go:89] found id: ""
	I0612 21:41:53.446805   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.446816   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:53.446823   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:53.446894   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:53.482411   80762 cri.go:89] found id: ""
	I0612 21:41:53.482433   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.482441   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:53.482449   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:53.482460   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:53.522419   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:53.522448   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:53.573107   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:53.573141   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:53.587101   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:53.587147   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:53.665631   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:53.665660   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:53.665675   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:56.242482   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:56.255606   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:56.255682   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:56.290837   80762 cri.go:89] found id: ""
	I0612 21:41:56.290861   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.290872   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:56.290880   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:56.290938   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:56.325429   80762 cri.go:89] found id: ""
	I0612 21:41:56.325458   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.325466   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:56.325471   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:56.325534   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:56.359809   80762 cri.go:89] found id: ""
	I0612 21:41:56.359835   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.359845   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:56.359852   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:56.359912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:56.397775   80762 cri.go:89] found id: ""
	I0612 21:41:56.397803   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.397815   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:56.397823   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:56.397884   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:56.433917   80762 cri.go:89] found id: ""
	I0612 21:41:56.433945   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.433956   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:56.433963   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:56.434028   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:56.467390   80762 cri.go:89] found id: ""
	I0612 21:41:56.467419   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.467429   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:56.467438   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:56.467496   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:56.504014   80762 cri.go:89] found id: ""
	I0612 21:41:56.504048   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.504059   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:56.504067   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:56.504138   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:56.544157   80762 cri.go:89] found id: ""
	I0612 21:41:56.544187   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.544198   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:56.544209   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:56.544224   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:56.595431   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:56.595462   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:56.608897   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:56.608936   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:56.682706   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:56.682735   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:56.682749   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:56.762598   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:56.762634   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:55.166152   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:57.167363   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.666265   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:56.514832   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:58.515091   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.317285   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:01.818345   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.302898   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:59.317901   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:59.317976   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:59.360136   80762 cri.go:89] found id: ""
	I0612 21:41:59.360164   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.360174   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:59.360181   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:59.360249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:59.397205   80762 cri.go:89] found id: ""
	I0612 21:41:59.397233   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.397244   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:59.397252   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:59.397312   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:59.437063   80762 cri.go:89] found id: ""
	I0612 21:41:59.437093   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.437105   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:59.437113   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:59.437172   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:59.472800   80762 cri.go:89] found id: ""
	I0612 21:41:59.472827   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.472835   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:59.472843   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:59.472904   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:59.509452   80762 cri.go:89] found id: ""
	I0612 21:41:59.509474   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.509482   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:59.509487   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:59.509534   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:59.546121   80762 cri.go:89] found id: ""
	I0612 21:41:59.546151   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.546162   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:59.546170   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:59.546231   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:59.582983   80762 cri.go:89] found id: ""
	I0612 21:41:59.583007   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.583014   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:59.583020   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:59.583065   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:59.621110   80762 cri.go:89] found id: ""
	I0612 21:41:59.621148   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.621160   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:59.621171   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:59.621187   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:59.673113   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:59.673143   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:59.688106   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:59.688171   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:59.767653   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:59.767678   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:59.767692   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:59.848467   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:59.848507   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:02.391324   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:02.406543   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:02.406621   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:02.442225   80762 cri.go:89] found id: ""
	I0612 21:42:02.442255   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.442265   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:02.442273   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:02.442341   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:02.479445   80762 cri.go:89] found id: ""
	I0612 21:42:02.479476   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.479487   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:02.479495   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:02.479557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:02.517654   80762 cri.go:89] found id: ""
	I0612 21:42:02.517685   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.517697   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:02.517705   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:02.517775   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:02.562743   80762 cri.go:89] found id: ""
	I0612 21:42:02.562777   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.562788   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:02.562807   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:02.562873   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:02.597775   80762 cri.go:89] found id: ""
	I0612 21:42:02.597805   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.597816   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:02.597824   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:02.597886   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:02.633869   80762 cri.go:89] found id: ""
	I0612 21:42:02.633901   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.633913   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:02.633921   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:02.633979   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:02.671931   80762 cri.go:89] found id: ""
	I0612 21:42:02.671962   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.671974   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:02.671982   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:02.672044   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:02.709162   80762 cri.go:89] found id: ""
	I0612 21:42:02.709192   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.709204   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:02.709214   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:02.709228   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:02.722937   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:02.722967   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:02.798249   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:02.798275   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:02.798292   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:02.165664   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:04.166215   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:01.012458   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:03.513414   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:04.317221   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:06.818062   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:02.876339   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:02.876376   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:02.913080   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:02.913106   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:05.464433   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:05.478249   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:05.478326   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:05.520742   80762 cri.go:89] found id: ""
	I0612 21:42:05.520765   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.520772   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:05.520778   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:05.520840   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:05.564864   80762 cri.go:89] found id: ""
	I0612 21:42:05.564896   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.564904   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:05.564911   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:05.564956   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:05.602917   80762 cri.go:89] found id: ""
	I0612 21:42:05.602942   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.602950   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:05.602956   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:05.603040   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:05.645073   80762 cri.go:89] found id: ""
	I0612 21:42:05.645104   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.645112   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:05.645119   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:05.645166   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:05.684133   80762 cri.go:89] found id: ""
	I0612 21:42:05.684165   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.684176   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:05.684184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:05.684249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:05.721461   80762 cri.go:89] found id: ""
	I0612 21:42:05.721489   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.721499   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:05.721506   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:05.721573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:05.756710   80762 cri.go:89] found id: ""
	I0612 21:42:05.756744   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.756755   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:05.756763   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:05.756814   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:05.792182   80762 cri.go:89] found id: ""
	I0612 21:42:05.792210   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.792220   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:05.792230   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:05.792245   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:05.836597   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:05.836632   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:05.888704   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:05.888742   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:05.903354   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:05.903387   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:05.976146   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:05.976169   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:05.976183   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:06.664789   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.666830   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:06.013885   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.512997   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:09.316398   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:11.317014   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.559612   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:08.573592   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:08.573648   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:08.613347   80762 cri.go:89] found id: ""
	I0612 21:42:08.613373   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.613381   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:08.613387   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:08.613449   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:08.650606   80762 cri.go:89] found id: ""
	I0612 21:42:08.650634   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.650643   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:08.650648   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:08.650692   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:08.687056   80762 cri.go:89] found id: ""
	I0612 21:42:08.687087   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.687097   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:08.687105   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:08.687191   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:08.723112   80762 cri.go:89] found id: ""
	I0612 21:42:08.723138   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.723146   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:08.723161   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:08.723238   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:08.764772   80762 cri.go:89] found id: ""
	I0612 21:42:08.764801   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.764812   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:08.764820   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:08.764888   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:08.801914   80762 cri.go:89] found id: ""
	I0612 21:42:08.801944   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.801954   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:08.801962   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:08.802025   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:08.837991   80762 cri.go:89] found id: ""
	I0612 21:42:08.838017   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.838025   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:08.838030   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:08.838084   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:08.874977   80762 cri.go:89] found id: ""
	I0612 21:42:08.875016   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.875027   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:08.875039   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:08.875058   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:08.931628   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:08.931659   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:08.946763   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:08.946791   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:09.028039   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:09.028061   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:09.028079   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:09.104350   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:09.104406   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:11.645114   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:11.659382   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:11.659455   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:11.702205   80762 cri.go:89] found id: ""
	I0612 21:42:11.702236   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.702246   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:11.702254   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:11.702309   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:11.748328   80762 cri.go:89] found id: ""
	I0612 21:42:11.748350   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.748357   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:11.748362   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:11.748408   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:11.788980   80762 cri.go:89] found id: ""
	I0612 21:42:11.789009   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.789020   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:11.789027   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:11.789083   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:11.829886   80762 cri.go:89] found id: ""
	I0612 21:42:11.829910   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.829920   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:11.829928   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:11.830006   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:11.870088   80762 cri.go:89] found id: ""
	I0612 21:42:11.870120   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.870131   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:11.870138   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:11.870201   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:11.907862   80762 cri.go:89] found id: ""
	I0612 21:42:11.907893   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.907905   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:11.907913   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:11.907973   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:11.947773   80762 cri.go:89] found id: ""
	I0612 21:42:11.947798   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.947808   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:11.947816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:11.947876   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:11.987806   80762 cri.go:89] found id: ""
	I0612 21:42:11.987837   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.987848   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:11.987859   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:11.987878   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:12.043451   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:12.043481   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:12.057946   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:12.057980   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:12.134265   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:12.134298   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:12.134310   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:12.221276   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:12.221315   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:11.165305   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.165846   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:11.012728   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.512292   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.512327   80243 pod_ready.go:81] duration metric: took 4m0.006424182s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	E0612 21:42:13.512336   80243 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0612 21:42:13.512343   80243 pod_ready.go:38] duration metric: took 4m5.595554955s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:42:13.512359   80243 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:42:13.512383   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:13.512428   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:13.571855   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:13.571882   80243 cri.go:89] found id: ""
	I0612 21:42:13.571892   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:13.571942   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.576505   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:13.576557   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:13.614768   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:13.614792   80243 cri.go:89] found id: ""
	I0612 21:42:13.614799   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:13.614847   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.619276   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:13.619342   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:13.662832   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:13.662856   80243 cri.go:89] found id: ""
	I0612 21:42:13.662866   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:13.662931   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.667956   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:13.668031   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:13.710456   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:13.710479   80243 cri.go:89] found id: ""
	I0612 21:42:13.710487   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:13.710540   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.715411   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:13.715480   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:13.759924   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:13.759952   80243 cri.go:89] found id: ""
	I0612 21:42:13.759965   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:13.760027   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.764854   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:13.764919   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:13.804802   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:13.804826   80243 cri.go:89] found id: ""
	I0612 21:42:13.804834   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:13.804891   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.809421   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:13.809489   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:13.846580   80243 cri.go:89] found id: ""
	I0612 21:42:13.846615   80243 logs.go:276] 0 containers: []
	W0612 21:42:13.846625   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:13.846633   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:13.846685   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:13.893480   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:13.893504   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:13.893510   80243 cri.go:89] found id: ""
	I0612 21:42:13.893523   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:13.893571   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.898530   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.905072   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:13.905100   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:13.942165   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:13.942195   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:13.981852   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:13.981882   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:14.018431   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:14.018457   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:14.067616   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:14.067645   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:14.082853   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:14.082886   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:14.220156   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:14.220188   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:14.274395   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:14.274430   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:14.319087   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:14.319116   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:14.834792   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:14.834831   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:14.893213   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:14.893252   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:14.957423   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:14.957466   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:15.013756   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:15.013803   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:13.318558   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:15.318904   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:14.760949   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:14.775242   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:14.775356   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:14.818818   80762 cri.go:89] found id: ""
	I0612 21:42:14.818847   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.818856   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:14.818863   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:14.818931   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:14.859106   80762 cri.go:89] found id: ""
	I0612 21:42:14.859146   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.859157   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:14.859164   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:14.859247   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:14.894993   80762 cri.go:89] found id: ""
	I0612 21:42:14.895016   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.895026   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:14.895037   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:14.895087   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:14.943534   80762 cri.go:89] found id: ""
	I0612 21:42:14.943561   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.943572   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:14.943579   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:14.943645   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:14.985243   80762 cri.go:89] found id: ""
	I0612 21:42:14.985267   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.985274   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:14.985280   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:14.985328   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:15.029253   80762 cri.go:89] found id: ""
	I0612 21:42:15.029286   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.029297   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:15.029305   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:15.029371   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:15.063471   80762 cri.go:89] found id: ""
	I0612 21:42:15.063499   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.063510   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:15.063517   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:15.063580   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:15.101152   80762 cri.go:89] found id: ""
	I0612 21:42:15.101181   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.101201   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:15.101212   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:15.101227   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:15.178398   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:15.178416   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:15.178429   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:15.255420   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:15.255468   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:15.295492   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:15.295519   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:15.345010   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:15.345051   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:15.166546   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.666141   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:19.672626   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.561453   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:17.579672   80243 api_server.go:72] duration metric: took 4m17.376220984s to wait for apiserver process to appear ...
	I0612 21:42:17.579702   80243 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:42:17.579741   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:17.579787   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:17.620290   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:17.620318   80243 cri.go:89] found id: ""
	I0612 21:42:17.620325   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:17.620387   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.624598   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:17.624658   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:17.665957   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:17.665985   80243 cri.go:89] found id: ""
	I0612 21:42:17.665995   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:17.666056   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.671143   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:17.671222   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:17.717377   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:17.717396   80243 cri.go:89] found id: ""
	I0612 21:42:17.717404   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:17.717459   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.721710   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:17.721774   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:17.762712   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:17.762739   80243 cri.go:89] found id: ""
	I0612 21:42:17.762749   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:17.762807   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.767258   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:17.767329   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:17.803905   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:17.803956   80243 cri.go:89] found id: ""
	I0612 21:42:17.803969   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:17.804034   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.808260   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:17.808323   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:17.847402   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:17.847432   80243 cri.go:89] found id: ""
	I0612 21:42:17.847441   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:17.847502   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.851685   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:17.851757   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:17.897026   80243 cri.go:89] found id: ""
	I0612 21:42:17.897051   80243 logs.go:276] 0 containers: []
	W0612 21:42:17.897059   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:17.897065   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:17.897122   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:17.953764   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:17.953793   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:17.953799   80243 cri.go:89] found id: ""
	I0612 21:42:17.953808   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:17.953875   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.959822   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.965103   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:17.965127   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:18.089205   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:18.089229   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:18.153823   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:18.153876   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:18.198010   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:18.198037   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:18.255380   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:18.255410   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:18.271692   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:18.271725   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:18.318018   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:18.318049   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:18.379352   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:18.379386   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:18.437854   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:18.437884   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:18.487618   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:18.487650   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:18.934735   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:18.934784   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:18.983010   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:18.983050   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:19.043569   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:19.043605   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:17.819422   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:20.315423   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.862640   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:17.879256   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:17.879333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:17.918910   80762 cri.go:89] found id: ""
	I0612 21:42:17.918940   80762 logs.go:276] 0 containers: []
	W0612 21:42:17.918951   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:17.918958   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:17.919018   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:17.959701   80762 cri.go:89] found id: ""
	I0612 21:42:17.959726   80762 logs.go:276] 0 containers: []
	W0612 21:42:17.959734   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:17.959739   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:17.959820   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:18.005096   80762 cri.go:89] found id: ""
	I0612 21:42:18.005125   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.005142   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:18.005150   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:18.005211   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:18.046877   80762 cri.go:89] found id: ""
	I0612 21:42:18.046907   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.046919   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:18.046927   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:18.046992   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:18.087907   80762 cri.go:89] found id: ""
	I0612 21:42:18.087934   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.087946   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:18.087953   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:18.088016   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:18.139423   80762 cri.go:89] found id: ""
	I0612 21:42:18.139453   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.139464   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:18.139473   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:18.139535   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:18.180433   80762 cri.go:89] found id: ""
	I0612 21:42:18.180459   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.180469   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:18.180476   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:18.180537   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:18.220966   80762 cri.go:89] found id: ""
	I0612 21:42:18.220996   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.221005   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:18.221015   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:18.221033   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:18.276006   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:18.276031   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:18.290975   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:18.291026   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:18.369318   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:18.369345   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:18.369359   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:18.451336   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:18.451380   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:21.016353   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:21.030544   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:21.030611   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:21.072558   80762 cri.go:89] found id: ""
	I0612 21:42:21.072583   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.072591   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:21.072596   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:21.072649   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:21.106320   80762 cri.go:89] found id: ""
	I0612 21:42:21.106352   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.106364   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:21.106372   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:21.106431   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:21.139155   80762 cri.go:89] found id: ""
	I0612 21:42:21.139201   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.139212   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:21.139220   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:21.139285   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:21.178731   80762 cri.go:89] found id: ""
	I0612 21:42:21.178762   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.178772   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:21.178779   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:21.178838   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:21.213606   80762 cri.go:89] found id: ""
	I0612 21:42:21.213635   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.213645   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:21.213652   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:21.213707   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:21.250966   80762 cri.go:89] found id: ""
	I0612 21:42:21.250993   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.251009   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:21.251017   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:21.251084   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:21.289434   80762 cri.go:89] found id: ""
	I0612 21:42:21.289457   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.289465   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:21.289474   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:21.289520   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:21.329028   80762 cri.go:89] found id: ""
	I0612 21:42:21.329058   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.329069   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:21.329080   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:21.329098   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:21.342621   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:21.342648   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:21.418742   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:21.418766   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:21.418779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:21.493909   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:21.493944   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:21.534693   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:21.534723   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:22.165337   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.166122   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:21.581443   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:42:21.586756   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 200:
	ok
	I0612 21:42:21.587670   80243 api_server.go:141] control plane version: v1.30.1
	I0612 21:42:21.587691   80243 api_server.go:131] duration metric: took 4.007982669s to wait for apiserver health ...
	I0612 21:42:21.587699   80243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:42:21.587720   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:21.587761   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:21.627942   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:21.627965   80243 cri.go:89] found id: ""
	I0612 21:42:21.627974   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:21.628036   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.632308   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:21.632380   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:21.674453   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:21.674474   80243 cri.go:89] found id: ""
	I0612 21:42:21.674482   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:21.674539   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.679303   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:21.679376   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:21.717454   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:21.717483   80243 cri.go:89] found id: ""
	I0612 21:42:21.717492   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:21.717555   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.722113   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:21.722176   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:21.758752   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:21.758780   80243 cri.go:89] found id: ""
	I0612 21:42:21.758790   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:21.758847   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.763397   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:21.763465   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:21.802552   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:21.802574   80243 cri.go:89] found id: ""
	I0612 21:42:21.802583   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:21.802641   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.807570   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:21.807633   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:21.855093   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:21.855118   80243 cri.go:89] found id: ""
	I0612 21:42:21.855128   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:21.855212   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.860163   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:21.860231   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:21.907934   80243 cri.go:89] found id: ""
	I0612 21:42:21.907960   80243 logs.go:276] 0 containers: []
	W0612 21:42:21.907969   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:21.907977   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:21.908046   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:21.950085   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:21.950114   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:21.950120   80243 cri.go:89] found id: ""
	I0612 21:42:21.950128   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:21.950186   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.955633   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.960017   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:21.960038   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:22.015659   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:22.015708   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:22.074063   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:22.074093   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:22.113545   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:22.113581   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:22.152550   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:22.152583   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:22.556816   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:22.556856   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:22.602506   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:22.602542   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:22.655545   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:22.655577   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:22.775731   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:22.775775   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:22.827447   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:22.827476   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:22.864866   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:22.864898   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:22.903885   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:22.903912   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:22.947166   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:22.947214   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:25.472711   80243 system_pods.go:59] 8 kube-system pods found
	I0612 21:42:25.472743   80243 system_pods.go:61] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running
	I0612 21:42:25.472750   80243 system_pods.go:61] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running
	I0612 21:42:25.472755   80243 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running
	I0612 21:42:25.472761   80243 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running
	I0612 21:42:25.472765   80243 system_pods.go:61] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running
	I0612 21:42:25.472770   80243 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running
	I0612 21:42:25.472777   80243 system_pods.go:61] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:42:25.472783   80243 system_pods.go:61] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running
	I0612 21:42:25.472794   80243 system_pods.go:74] duration metric: took 3.885088008s to wait for pod list to return data ...
	I0612 21:42:25.472803   80243 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:42:25.475046   80243 default_sa.go:45] found service account: "default"
	I0612 21:42:25.475072   80243 default_sa.go:55] duration metric: took 2.260179ms for default service account to be created ...
	I0612 21:42:25.475082   80243 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:42:25.479903   80243 system_pods.go:86] 8 kube-system pods found
	I0612 21:42:25.479925   80243 system_pods.go:89] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running
	I0612 21:42:25.479931   80243 system_pods.go:89] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running
	I0612 21:42:25.479935   80243 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running
	I0612 21:42:25.479940   80243 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running
	I0612 21:42:25.479944   80243 system_pods.go:89] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running
	I0612 21:42:25.479950   80243 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running
	I0612 21:42:25.479959   80243 system_pods.go:89] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:42:25.479969   80243 system_pods.go:89] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running
	I0612 21:42:25.479979   80243 system_pods.go:126] duration metric: took 4.890624ms to wait for k8s-apps to be running ...
	I0612 21:42:25.479990   80243 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:42:25.480037   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:42:25.496529   80243 system_svc.go:56] duration metric: took 16.534285ms WaitForService to wait for kubelet
	I0612 21:42:25.496549   80243 kubeadm.go:576] duration metric: took 4m25.293104149s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:42:25.496565   80243 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:42:25.499277   80243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:42:25.499293   80243 node_conditions.go:123] node cpu capacity is 2
	I0612 21:42:25.499304   80243 node_conditions.go:105] duration metric: took 2.734965ms to run NodePressure ...
	I0612 21:42:25.499314   80243 start.go:240] waiting for startup goroutines ...
	I0612 21:42:25.499320   80243 start.go:245] waiting for cluster config update ...
	I0612 21:42:25.499339   80243 start.go:254] writing updated cluster config ...
	I0612 21:42:25.499628   80243 ssh_runner.go:195] Run: rm -f paused
	I0612 21:42:25.547780   80243 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:42:25.549693   80243 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-376087" cluster and "default" namespace by default
	I0612 21:42:22.317078   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.317826   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:26.818102   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.086466   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:24.101820   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:24.101877   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:24.145732   80762 cri.go:89] found id: ""
	I0612 21:42:24.145757   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.145767   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:24.145774   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:24.145832   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:24.182765   80762 cri.go:89] found id: ""
	I0612 21:42:24.182788   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.182795   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:24.182801   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:24.182889   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:24.235093   80762 cri.go:89] found id: ""
	I0612 21:42:24.235121   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.235129   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:24.235134   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:24.235208   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:24.269788   80762 cri.go:89] found id: ""
	I0612 21:42:24.269809   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.269816   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:24.269822   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:24.269867   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:24.306594   80762 cri.go:89] found id: ""
	I0612 21:42:24.306620   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.306628   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:24.306634   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:24.306693   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:24.343766   80762 cri.go:89] found id: ""
	I0612 21:42:24.343786   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.343795   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:24.343802   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:24.343858   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:24.384417   80762 cri.go:89] found id: ""
	I0612 21:42:24.384447   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.384457   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:24.384464   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:24.384524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:24.424935   80762 cri.go:89] found id: ""
	I0612 21:42:24.424958   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.424965   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:24.424974   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:24.424988   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:24.499737   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:24.499771   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:24.537631   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:24.537667   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:24.593743   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:24.593779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:24.608078   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:24.608107   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:24.679729   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:27.180828   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:27.195484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:27.195552   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:27.235725   80762 cri.go:89] found id: ""
	I0612 21:42:27.235750   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.235761   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:27.235768   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:27.235816   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:27.279763   80762 cri.go:89] found id: ""
	I0612 21:42:27.279795   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.279806   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:27.279814   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:27.279875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:27.320510   80762 cri.go:89] found id: ""
	I0612 21:42:27.320534   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.320543   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:27.320554   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:27.320641   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:27.359195   80762 cri.go:89] found id: ""
	I0612 21:42:27.359227   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.359239   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:27.359247   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:27.359312   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:27.394977   80762 cri.go:89] found id: ""
	I0612 21:42:27.395004   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.395015   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:27.395033   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:27.395099   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:27.431905   80762 cri.go:89] found id: ""
	I0612 21:42:27.431925   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.431933   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:27.431945   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:27.431990   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:27.469929   80762 cri.go:89] found id: ""
	I0612 21:42:27.469954   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.469961   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:27.469967   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:27.470024   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:27.505128   80762 cri.go:89] found id: ""
	I0612 21:42:27.505153   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.505160   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:27.505169   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:27.505180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:27.556739   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:27.556771   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:27.572730   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:27.572757   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:27.646797   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:27.646819   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:27.646836   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:27.726554   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:27.726599   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:26.665496   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:29.166323   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:29.316302   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:31.316334   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:30.268770   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:30.282575   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:30.282635   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:30.321243   80762 cri.go:89] found id: ""
	I0612 21:42:30.321276   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.321288   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:30.321295   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:30.321342   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:30.359403   80762 cri.go:89] found id: ""
	I0612 21:42:30.359432   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.359443   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:30.359451   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:30.359505   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:30.395967   80762 cri.go:89] found id: ""
	I0612 21:42:30.396006   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.396015   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:30.396028   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:30.396087   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:30.438093   80762 cri.go:89] found id: ""
	I0612 21:42:30.438123   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.438132   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:30.438138   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:30.438192   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:30.476859   80762 cri.go:89] found id: ""
	I0612 21:42:30.476888   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.476898   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:30.476905   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:30.476968   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:30.512998   80762 cri.go:89] found id: ""
	I0612 21:42:30.513026   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.513037   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:30.513045   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:30.513106   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:30.548822   80762 cri.go:89] found id: ""
	I0612 21:42:30.548847   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.548855   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:30.548861   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:30.548908   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:30.584385   80762 cri.go:89] found id: ""
	I0612 21:42:30.584417   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.584426   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:30.584439   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:30.584454   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:30.685995   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:30.686019   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:30.686030   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:30.778789   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:30.778827   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:30.819467   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:30.819511   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:30.872563   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:30.872599   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:31.659828   80404 pod_ready.go:81] duration metric: took 4m0.000909177s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" ...
	E0612 21:42:31.659857   80404 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0612 21:42:31.659875   80404 pod_ready.go:38] duration metric: took 4m13.021158077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:42:31.659904   80404 kubeadm.go:591] duration metric: took 4m20.257268424s to restartPrimaryControlPlane
	W0612 21:42:31.659968   80404 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:42:31.660002   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:42:33.316457   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:35.316525   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:33.387831   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:33.401663   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:33.401740   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:33.439690   80762 cri.go:89] found id: ""
	I0612 21:42:33.439723   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.439735   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:33.439743   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:33.439805   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:33.480330   80762 cri.go:89] found id: ""
	I0612 21:42:33.480357   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.480365   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:33.480371   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:33.480422   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:33.520367   80762 cri.go:89] found id: ""
	I0612 21:42:33.520396   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.520407   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:33.520415   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:33.520476   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:33.556859   80762 cri.go:89] found id: ""
	I0612 21:42:33.556892   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.556904   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:33.556911   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:33.556963   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:33.595982   80762 cri.go:89] found id: ""
	I0612 21:42:33.596014   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.596024   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:33.596030   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:33.596091   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:33.630942   80762 cri.go:89] found id: ""
	I0612 21:42:33.630974   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.630986   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:33.630994   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:33.631055   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:33.671649   80762 cri.go:89] found id: ""
	I0612 21:42:33.671676   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.671684   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:33.671690   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:33.671734   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:33.716664   80762 cri.go:89] found id: ""
	I0612 21:42:33.716690   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.716700   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:33.716711   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:33.716726   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:33.734168   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:33.734198   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:33.826469   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:33.826491   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:33.826507   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:33.915109   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:33.915142   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:33.957969   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:33.958007   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:36.515258   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:36.529603   80762 kubeadm.go:591] duration metric: took 4m4.234271001s to restartPrimaryControlPlane
	W0612 21:42:36.529688   80762 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:42:36.529719   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:42:37.316720   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:39.317633   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:41.816783   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:41.545629   80762 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.01588354s)
	I0612 21:42:41.545734   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:42:41.561025   80762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:42:41.572788   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:42:41.583027   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:42:41.583052   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:42:41.583095   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:42:41.593433   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:42:41.593502   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:42:41.603944   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:42:41.613382   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:42:41.613432   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:42:41.622874   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:42:41.632270   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:42:41.632370   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:42:41.642072   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:42:41.652120   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:42:41.652194   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:42:41.662684   80762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:42:41.894903   80762 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:42:43.817122   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:45.817164   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:47.817201   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:50.316134   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:52.317090   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:54.318066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:56.816196   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:58.817948   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:01.316826   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:03.728120   80404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.068094257s)
	I0612 21:43:03.728183   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:03.744990   80404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:43:03.755365   80404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:43:03.765154   80404 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:43:03.765181   80404 kubeadm.go:156] found existing configuration files:
	
	I0612 21:43:03.765226   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:43:03.775246   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:43:03.775304   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:43:03.785389   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:43:03.794999   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:43:03.795046   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:43:03.804771   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:43:03.814137   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:43:03.814187   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:43:03.824449   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:43:03.833631   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:43:03.833687   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:43:03.843203   80404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:43:03.895827   80404 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 21:43:03.895927   80404 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:43:04.040495   80404 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:43:04.040666   80404 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:43:04.040822   80404 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:43:04.252894   80404 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:43:04.254835   80404 out.go:204]   - Generating certificates and keys ...
	I0612 21:43:04.254952   80404 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:43:04.255060   80404 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:43:04.255219   80404 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:43:04.255296   80404 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:43:04.255399   80404 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:43:04.255490   80404 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:43:04.255589   80404 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:43:04.255692   80404 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:43:04.255794   80404 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:43:04.255885   80404 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:43:04.255923   80404 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:43:04.255978   80404 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:43:04.460505   80404 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:43:04.640215   80404 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 21:43:04.722455   80404 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:43:04.862670   80404 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:43:05.112478   80404 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:43:05.113163   80404 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:43:05.115573   80404 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:43:03.817386   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:06.317207   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:05.117650   80404 out.go:204]   - Booting up control plane ...
	I0612 21:43:05.117758   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:43:05.117887   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:43:05.119410   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:43:05.139412   80404 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:43:05.139504   80404 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:43:05.139575   80404 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:43:05.268539   80404 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 21:43:05.268636   80404 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 21:43:05.771267   80404 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.898809ms
	I0612 21:43:05.771364   80404 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 21:43:11.274484   80404 kubeadm.go:309] [api-check] The API server is healthy after 5.503111655s
	I0612 21:43:11.291273   80404 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 21:43:11.319349   80404 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 21:43:11.357447   80404 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 21:43:11.357709   80404 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-591460 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 21:43:11.368936   80404 kubeadm.go:309] [bootstrap-token] Using token: 0iiegq.ujvrnknfmyshffxu
	I0612 21:43:08.816875   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:10.817031   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:11.370411   80404 out.go:204]   - Configuring RBAC rules ...
	I0612 21:43:11.370567   80404 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 21:43:11.375891   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 21:43:11.388345   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 21:43:11.392726   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 21:43:11.396867   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 21:43:11.401212   80404 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 21:43:11.683506   80404 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 21:43:12.114832   80404 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 21:43:12.683696   80404 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 21:43:12.683724   80404 kubeadm.go:309] 
	I0612 21:43:12.683811   80404 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 21:43:12.683823   80404 kubeadm.go:309] 
	I0612 21:43:12.683938   80404 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 21:43:12.683958   80404 kubeadm.go:309] 
	I0612 21:43:12.684002   80404 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 21:43:12.684070   80404 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 21:43:12.684129   80404 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 21:43:12.684146   80404 kubeadm.go:309] 
	I0612 21:43:12.684232   80404 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 21:43:12.684247   80404 kubeadm.go:309] 
	I0612 21:43:12.684317   80404 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 21:43:12.684330   80404 kubeadm.go:309] 
	I0612 21:43:12.684398   80404 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 21:43:12.684502   80404 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 21:43:12.684595   80404 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 21:43:12.684604   80404 kubeadm.go:309] 
	I0612 21:43:12.684700   80404 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 21:43:12.684807   80404 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 21:43:12.684816   80404 kubeadm.go:309] 
	I0612 21:43:12.684915   80404 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0iiegq.ujvrnknfmyshffxu \
	I0612 21:43:12.685061   80404 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 21:43:12.685105   80404 kubeadm.go:309] 	--control-plane 
	I0612 21:43:12.685116   80404 kubeadm.go:309] 
	I0612 21:43:12.685237   80404 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 21:43:12.685248   80404 kubeadm.go:309] 
	I0612 21:43:12.685352   80404 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0iiegq.ujvrnknfmyshffxu \
	I0612 21:43:12.685509   80404 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 21:43:12.685622   80404 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:43:12.685831   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:43:12.685848   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:43:12.687835   80404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:43:12.689100   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:43:12.700384   80404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:43:12.720228   80404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:43:12.720305   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:12.720330   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-591460 minikube.k8s.io/updated_at=2024_06_12T21_43_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=embed-certs-591460 minikube.k8s.io/primary=true
	I0612 21:43:12.751866   80404 ops.go:34] apiserver oom_adj: -16
	I0612 21:43:12.927644   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:13.428393   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:13.928221   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:14.428286   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:12.817125   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:15.316899   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:14.928273   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:15.428431   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:15.927968   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:16.428202   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:16.927882   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.428544   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.927844   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:18.428385   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:18.928105   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:19.428421   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.317080   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:19.317419   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:21.816670   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:19.928638   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:20.428310   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:20.928565   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:21.428377   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:21.928158   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:22.428356   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:22.927863   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:23.427955   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:23.928226   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:24.427823   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:24.928404   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:25.428367   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:25.514417   80404 kubeadm.go:1107] duration metric: took 12.794169259s to wait for elevateKubeSystemPrivileges
	W0612 21:43:25.514460   80404 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 21:43:25.514470   80404 kubeadm.go:393] duration metric: took 5m14.162212832s to StartCluster
	I0612 21:43:25.514490   80404 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:43:25.514576   80404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:43:25.518597   80404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:43:25.518811   80404 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:43:25.520571   80404 out.go:177] * Verifying Kubernetes components...
	I0612 21:43:25.518903   80404 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:43:25.519030   80404 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:43:25.521967   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:43:25.522001   80404 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-591460"
	I0612 21:43:25.522043   80404 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-591460"
	W0612 21:43:25.522056   80404 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:43:25.522053   80404 addons.go:69] Setting default-storageclass=true in profile "embed-certs-591460"
	I0612 21:43:25.522089   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.522100   80404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-591460"
	I0612 21:43:25.522089   80404 addons.go:69] Setting metrics-server=true in profile "embed-certs-591460"
	I0612 21:43:25.522158   80404 addons.go:234] Setting addon metrics-server=true in "embed-certs-591460"
	W0612 21:43:25.522170   80404 addons.go:243] addon metrics-server should already be in state true
	I0612 21:43:25.522196   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.522502   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522509   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522532   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.522535   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.522585   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522611   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.538989   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I0612 21:43:25.539032   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0612 21:43:25.539591   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.539592   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.540199   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.540222   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.540293   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.540323   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.540610   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.540736   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.541265   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.541281   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.541312   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.541431   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.542393   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46299
	I0612 21:43:25.543042   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.543604   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.543643   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.543997   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.544208   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.547823   80404 addons.go:234] Setting addon default-storageclass=true in "embed-certs-591460"
	W0612 21:43:25.547849   80404 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:43:25.547878   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.548237   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.548272   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.558486   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46589
	I0612 21:43:25.558934   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.559936   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.559967   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.560387   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.560600   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.560728   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I0612 21:43:25.561116   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.561595   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.561610   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.561928   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.562198   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.562832   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.565065   80404 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:43:25.563946   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.565393   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0612 21:43:25.566521   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:43:25.566535   80404 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:43:25.566582   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.568114   80404 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:43:24.316660   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:25.810857   80157 pod_ready.go:81] duration metric: took 4m0.000926725s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" ...
	E0612 21:43:25.810888   80157 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0612 21:43:25.810936   80157 pod_ready.go:38] duration metric: took 4m14.539121336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:25.810971   80157 kubeadm.go:591] duration metric: took 4m21.56451584s to restartPrimaryControlPlane
	W0612 21:43:25.811042   80157 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:43:25.811074   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:43:25.567032   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.569772   80404 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:43:25.569794   80404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:43:25.569812   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.570271   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.570291   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.570363   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.570698   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.571498   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.571514   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.571539   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.571691   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.571861   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.572032   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.572851   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.572894   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.573962   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.574403   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.574429   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.574762   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.574974   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.575164   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.575464   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.589637   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39227
	I0612 21:43:25.590155   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.591035   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.591059   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.591596   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.591845   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.593885   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.594095   80404 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:43:25.594112   80404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:43:25.594131   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.597769   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.598347   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.598379   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.598434   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.598635   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.598766   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.598860   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.762134   80404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:43:25.818663   80404 node_ready.go:35] waiting up to 6m0s for node "embed-certs-591460" to be "Ready" ...
	I0612 21:43:25.830753   80404 node_ready.go:49] node "embed-certs-591460" has status "Ready":"True"
	I0612 21:43:25.830780   80404 node_ready.go:38] duration metric: took 12.086962ms for node "embed-certs-591460" to be "Ready" ...
	I0612 21:43:25.830792   80404 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:25.841084   80404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:25.929395   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:43:25.929427   80404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:43:26.001489   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:43:26.016234   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:43:26.016275   80404 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:43:26.030851   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:43:26.062707   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:43:26.062741   80404 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:43:26.157461   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:43:27.281342   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.279809959s)
	I0612 21:43:27.281364   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.250478112s)
	I0612 21:43:27.281392   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281405   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281408   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281420   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281712   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.281730   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.281739   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281748   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281861   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.281916   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.281933   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281942   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.283567   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.283582   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.283592   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.283597   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.283728   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.283740   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.324600   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.324625   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.324937   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.324941   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.324965   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.366096   80404 pod_ready.go:92] pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:27.366126   80404 pod_ready.go:81] duration metric: took 1.52501871s for pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:27.366139   80404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:27.530900   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.373391416s)
	I0612 21:43:27.530973   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.530987   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.531382   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.531399   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.531406   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.531419   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.531428   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.533199   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.533212   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.533226   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.533238   80404 addons.go:475] Verifying addon metrics-server=true in "embed-certs-591460"
	I0612 21:43:27.534895   80404 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:43:27.536129   80404 addons.go:510] duration metric: took 2.017228253s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0612 21:43:28.373835   80404 pod_ready.go:92] pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.373862   80404 pod_ready.go:81] duration metric: took 1.007715736s for pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.373870   80404 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.379042   80404 pod_ready.go:92] pod "etcd-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.379065   80404 pod_ready.go:81] duration metric: took 5.188395ms for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.379078   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.384218   80404 pod_ready.go:92] pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.384233   80404 pod_ready.go:81] duration metric: took 5.148944ms for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.384241   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.389023   80404 pod_ready.go:92] pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.389046   80404 pod_ready.go:81] duration metric: took 4.78947ms for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.389056   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5l2wz" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.623880   80404 pod_ready.go:92] pod "kube-proxy-5l2wz" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.623902   80404 pod_ready.go:81] duration metric: took 234.83854ms for pod "kube-proxy-5l2wz" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.623910   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:29.022477   80404 pod_ready.go:92] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:29.022508   80404 pod_ready.go:81] duration metric: took 398.590821ms for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:29.022522   80404 pod_ready.go:38] duration metric: took 3.191712664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:29.022539   80404 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:43:29.022602   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:43:29.038776   80404 api_server.go:72] duration metric: took 3.51993276s to wait for apiserver process to appear ...
	I0612 21:43:29.038805   80404 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:43:29.038827   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:43:29.045455   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0612 21:43:29.047050   80404 api_server.go:141] control plane version: v1.30.1
	I0612 21:43:29.047072   80404 api_server.go:131] duration metric: took 8.260077ms to wait for apiserver health ...
	I0612 21:43:29.047080   80404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:43:29.226569   80404 system_pods.go:59] 9 kube-system pods found
	I0612 21:43:29.226603   80404 system_pods.go:61] "coredns-7db6d8ff4d-fpf5q" [1091154b-ef24-4447-b294-03f8d704f37e] Running
	I0612 21:43:29.226611   80404 system_pods.go:61] "coredns-7db6d8ff4d-hs7zn" [d8af54bf-17f9-48fe-a770-536c2313bc2a] Running
	I0612 21:43:29.226618   80404 system_pods.go:61] "etcd-embed-certs-591460" [bc7ad3a2-6cb6-4c32-94a7-20f6e3337b86] Running
	I0612 21:43:29.226624   80404 system_pods.go:61] "kube-apiserver-embed-certs-591460" [94b14cb3-5c3d-4be7-b5dc-3259d1fac58c] Running
	I0612 21:43:29.226631   80404 system_pods.go:61] "kube-controller-manager-embed-certs-591460" [c66f1ad8-df77-466e-9bbf-292e0937c7df] Running
	I0612 21:43:29.226636   80404 system_pods.go:61] "kube-proxy-5l2wz" [7130c7fb-880b-4a7b-937d-3980c89f217a] Running
	I0612 21:43:29.226642   80404 system_pods.go:61] "kube-scheduler-embed-certs-591460" [a02c9ded-942d-4107-a8f5-878a7924f1a4] Running
	I0612 21:43:29.226652   80404 system_pods.go:61] "metrics-server-569cc877fc-r7fbt" [e33a1ff8-3032-4be5-8b6a-3eedfbb92611] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:43:29.226659   80404 system_pods.go:61] "storage-provisioner" [ade8816b-866c-4ba3-9665-fc9b144a4286] Running
	I0612 21:43:29.226671   80404 system_pods.go:74] duration metric: took 179.583899ms to wait for pod list to return data ...
	I0612 21:43:29.226684   80404 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:43:29.422244   80404 default_sa.go:45] found service account: "default"
	I0612 21:43:29.422278   80404 default_sa.go:55] duration metric: took 195.585835ms for default service account to be created ...
	I0612 21:43:29.422290   80404 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:43:29.626614   80404 system_pods.go:86] 9 kube-system pods found
	I0612 21:43:29.626650   80404 system_pods.go:89] "coredns-7db6d8ff4d-fpf5q" [1091154b-ef24-4447-b294-03f8d704f37e] Running
	I0612 21:43:29.626659   80404 system_pods.go:89] "coredns-7db6d8ff4d-hs7zn" [d8af54bf-17f9-48fe-a770-536c2313bc2a] Running
	I0612 21:43:29.626667   80404 system_pods.go:89] "etcd-embed-certs-591460" [bc7ad3a2-6cb6-4c32-94a7-20f6e3337b86] Running
	I0612 21:43:29.626673   80404 system_pods.go:89] "kube-apiserver-embed-certs-591460" [94b14cb3-5c3d-4be7-b5dc-3259d1fac58c] Running
	I0612 21:43:29.626680   80404 system_pods.go:89] "kube-controller-manager-embed-certs-591460" [c66f1ad8-df77-466e-9bbf-292e0937c7df] Running
	I0612 21:43:29.626687   80404 system_pods.go:89] "kube-proxy-5l2wz" [7130c7fb-880b-4a7b-937d-3980c89f217a] Running
	I0612 21:43:29.626693   80404 system_pods.go:89] "kube-scheduler-embed-certs-591460" [a02c9ded-942d-4107-a8f5-878a7924f1a4] Running
	I0612 21:43:29.626703   80404 system_pods.go:89] "metrics-server-569cc877fc-r7fbt" [e33a1ff8-3032-4be5-8b6a-3eedfbb92611] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:43:29.626714   80404 system_pods.go:89] "storage-provisioner" [ade8816b-866c-4ba3-9665-fc9b144a4286] Running
	I0612 21:43:29.626725   80404 system_pods.go:126] duration metric: took 204.428087ms to wait for k8s-apps to be running ...
	I0612 21:43:29.626737   80404 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:43:29.626793   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:29.642423   80404 system_svc.go:56] duration metric: took 15.67694ms WaitForService to wait for kubelet
	I0612 21:43:29.642457   80404 kubeadm.go:576] duration metric: took 4.123619864s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:43:29.642481   80404 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:43:29.825804   80404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:43:29.825833   80404 node_conditions.go:123] node cpu capacity is 2
	I0612 21:43:29.825846   80404 node_conditions.go:105] duration metric: took 183.359091ms to run NodePressure ...
	I0612 21:43:29.825860   80404 start.go:240] waiting for startup goroutines ...
	I0612 21:43:29.825868   80404 start.go:245] waiting for cluster config update ...
	I0612 21:43:29.825881   80404 start.go:254] writing updated cluster config ...
	I0612 21:43:29.826229   80404 ssh_runner.go:195] Run: rm -f paused
	I0612 21:43:29.878580   80404 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:43:29.880438   80404 out.go:177] * Done! kubectl is now configured to use "embed-certs-591460" cluster and "default" namespace by default
	I0612 21:43:57.924825   80157 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.113719509s)
	I0612 21:43:57.924912   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:57.942507   80157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:43:57.953901   80157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:43:57.964374   80157 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:43:57.964396   80157 kubeadm.go:156] found existing configuration files:
	
	I0612 21:43:57.964439   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:43:57.974281   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:43:57.974366   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:43:57.985000   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:43:57.995268   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:43:57.995346   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:43:58.005482   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:43:58.015598   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:43:58.015659   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:43:58.028582   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:43:58.038706   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:43:58.038756   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:43:58.051818   80157 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:43:58.110576   80157 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 21:43:58.110645   80157 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:43:58.274454   80157 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:43:58.274625   80157 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:43:58.274751   80157 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:43:58.484837   80157 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:43:58.486643   80157 out.go:204]   - Generating certificates and keys ...
	I0612 21:43:58.486753   80157 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:43:58.486845   80157 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:43:58.486963   80157 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:43:58.487058   80157 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:43:58.487192   80157 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:43:58.487283   80157 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:43:58.487368   80157 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:43:58.487452   80157 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:43:58.487559   80157 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:43:58.487653   80157 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:43:58.487728   80157 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:43:58.487826   80157 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:43:58.644916   80157 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:43:58.789369   80157 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 21:43:58.924153   80157 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:43:59.044332   80157 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:43:59.352910   80157 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:43:59.353462   80157 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:43:59.356967   80157 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:43:59.359470   80157 out.go:204]   - Booting up control plane ...
	I0612 21:43:59.359596   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:43:59.359687   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:43:59.359792   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:43:59.378280   80157 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:43:59.379149   80157 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:43:59.379240   80157 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:43:59.521694   80157 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 21:43:59.521775   80157 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 21:44:00.036696   80157 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 514.972931ms
	I0612 21:44:00.036836   80157 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 21:44:05.539363   80157 kubeadm.go:309] [api-check] The API server is healthy after 5.502859715s
	I0612 21:44:05.552779   80157 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 21:44:05.567296   80157 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 21:44:05.603398   80157 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 21:44:05.603707   80157 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-087875 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 21:44:05.619311   80157 kubeadm.go:309] [bootstrap-token] Using token: x2knjj.1kuv2wdowwsbztfg
	I0612 21:44:05.621026   80157 out.go:204]   - Configuring RBAC rules ...
	I0612 21:44:05.621180   80157 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 21:44:05.628474   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 21:44:05.642438   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 21:44:05.647606   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 21:44:05.651982   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 21:44:05.656129   80157 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 21:44:05.947680   80157 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 21:44:06.430716   80157 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 21:44:06.950446   80157 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 21:44:06.951688   80157 kubeadm.go:309] 
	I0612 21:44:06.951771   80157 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 21:44:06.951782   80157 kubeadm.go:309] 
	I0612 21:44:06.951857   80157 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 21:44:06.951866   80157 kubeadm.go:309] 
	I0612 21:44:06.951919   80157 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 21:44:06.952007   80157 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 21:44:06.952083   80157 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 21:44:06.952094   80157 kubeadm.go:309] 
	I0612 21:44:06.952160   80157 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 21:44:06.952172   80157 kubeadm.go:309] 
	I0612 21:44:06.952222   80157 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 21:44:06.952232   80157 kubeadm.go:309] 
	I0612 21:44:06.952285   80157 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 21:44:06.952375   80157 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 21:44:06.952460   80157 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 21:44:06.952476   80157 kubeadm.go:309] 
	I0612 21:44:06.952612   80157 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 21:44:06.952711   80157 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 21:44:06.952722   80157 kubeadm.go:309] 
	I0612 21:44:06.952819   80157 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token x2knjj.1kuv2wdowwsbztfg \
	I0612 21:44:06.952933   80157 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 21:44:06.952963   80157 kubeadm.go:309] 	--control-plane 
	I0612 21:44:06.952985   80157 kubeadm.go:309] 
	I0612 21:44:06.953100   80157 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 21:44:06.953114   80157 kubeadm.go:309] 
	I0612 21:44:06.953219   80157 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token x2knjj.1kuv2wdowwsbztfg \
	I0612 21:44:06.953373   80157 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 21:44:06.953943   80157 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
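	The kubeadm output above ends with the standard post-init instructions. A minimal sketch of following them on the control-plane host, with a quick sanity check added (kubectl being on PATH is an assumption):
	
	mkdir -p "$HOME/.kube"
	sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
	sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
	kubectl get nodes    # the new control-plane node should be listed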
	I0612 21:44:06.953986   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:44:06.954003   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:44:06.956587   80157 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:44:06.957989   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:44:06.972666   80157 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
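	The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration. Its exact contents are not captured in this log; the following is only an illustrative conflist of roughly that shape (subnet and plugin fields here are assumptions, not the file minikube wrote):
	
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF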
	I0612 21:44:07.000720   80157 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:44:07.000822   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:07.000839   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-087875 minikube.k8s.io/updated_at=2024_06_12T21_44_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=no-preload-087875 minikube.k8s.io/primary=true
	I0612 21:44:07.201613   80157 ops.go:34] apiserver oom_adj: -16
	I0612 21:44:07.201713   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:07.702791   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:08.201886   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:08.702020   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:09.202755   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:09.702683   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:10.202007   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:10.702272   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:11.201764   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:11.702383   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:12.201880   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:12.702587   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:13.202524   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:13.702498   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:14.202157   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:14.702197   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:15.201852   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:15.702444   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:16.201919   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:16.701722   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:17.202307   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:17.701823   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:18.202602   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:18.702354   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:19.202207   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:19.308654   80157 kubeadm.go:1107] duration metric: took 12.307897648s to wait for elevateKubeSystemPrivileges
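	The repeated "kubectl get sa default" calls above are the elevateKubeSystemPrivileges step polling (roughly every 500ms, per the timestamps) until the "default" ServiceAccount exists after the minikube-rbac ClusterRoleBinding was created. A hedged bash equivalent of that wait loop:
	
	until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms polling interval seen above
	done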
	W0612 21:44:19.308699   80157 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 21:44:19.308709   80157 kubeadm.go:393] duration metric: took 5m15.118303799s to StartCluster
	I0612 21:44:19.308738   80157 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:44:19.308825   80157 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:44:19.311295   80157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:44:19.311587   80157 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:44:19.313263   80157 out.go:177] * Verifying Kubernetes components...
	I0612 21:44:19.311693   80157 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:44:19.311780   80157 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:44:19.315137   80157 addons.go:69] Setting storage-provisioner=true in profile "no-preload-087875"
	I0612 21:44:19.315148   80157 addons.go:69] Setting default-storageclass=true in profile "no-preload-087875"
	I0612 21:44:19.315192   80157 addons.go:234] Setting addon storage-provisioner=true in "no-preload-087875"
	I0612 21:44:19.315201   80157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-087875"
	I0612 21:44:19.315202   80157 addons.go:69] Setting metrics-server=true in profile "no-preload-087875"
	I0612 21:44:19.315240   80157 addons.go:234] Setting addon metrics-server=true in "no-preload-087875"
	W0612 21:44:19.315255   80157 addons.go:243] addon metrics-server should already be in state true
	I0612 21:44:19.315296   80157 host.go:66] Checking if "no-preload-087875" exists ...
	W0612 21:44:19.315209   80157 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:44:19.315397   80157 host.go:66] Checking if "no-preload-087875" exists ...
	I0612 21:44:19.315139   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:44:19.315636   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315666   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.315653   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315698   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.315731   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315750   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.331461   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40419
	I0612 21:44:19.331495   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I0612 21:44:19.331924   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.332019   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.332446   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.332466   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.332580   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.332603   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.332866   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.332911   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.333087   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.333484   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.333508   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.334462   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0612 21:44:19.334922   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.335447   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.335474   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.335812   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.336376   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.336408   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.336657   80157 addons.go:234] Setting addon default-storageclass=true in "no-preload-087875"
	W0612 21:44:19.336675   80157 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:44:19.336701   80157 host.go:66] Checking if "no-preload-087875" exists ...
	I0612 21:44:19.337047   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.337078   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.350724   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I0612 21:44:19.351308   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.351869   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.351897   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.352272   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.352503   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.354434   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I0612 21:44:19.354532   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.356594   80157 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:44:19.354927   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.355284   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37489
	I0612 21:44:19.357181   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.358026   80157 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:44:19.357219   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.358040   80157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:44:19.358048   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.358058   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.358407   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.358560   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.358577   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.359024   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.359035   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.359069   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.359408   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.361013   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.361524   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.363337   80157 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:44:19.361921   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.362312   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.364713   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:44:19.364727   80157 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:44:19.364736   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.364744   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.365021   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.365260   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.365419   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.368572   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.368971   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.368988   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.369144   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.369316   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.369431   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.369538   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.377220   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
	I0612 21:44:19.377598   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.378595   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.378621   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.378931   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.379127   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.380646   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.380844   80157 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:44:19.380857   80157 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:44:19.380869   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.383763   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.384201   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.384216   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.384504   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.384660   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.384816   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.384956   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.516231   80157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:44:19.539205   80157 node_ready.go:35] waiting up to 6m0s for node "no-preload-087875" to be "Ready" ...
	I0612 21:44:19.546948   80157 node_ready.go:49] node "no-preload-087875" has status "Ready":"True"
	I0612 21:44:19.546972   80157 node_ready.go:38] duration metric: took 7.739123ms for node "no-preload-087875" to be "Ready" ...
	I0612 21:44:19.546985   80157 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:44:19.553454   80157 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.562831   80157 pod_ready.go:92] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.562854   80157 pod_ready.go:81] duration metric: took 9.377758ms for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.562862   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.568274   80157 pod_ready.go:92] pod "kube-apiserver-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.568296   80157 pod_ready.go:81] duration metric: took 5.425162ms for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.568306   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.572960   80157 pod_ready.go:92] pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.572991   80157 pod_ready.go:81] duration metric: took 4.669828ms for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.573002   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lnhzt" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.620522   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:44:19.620548   80157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:44:19.654325   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:44:19.681762   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:44:19.681800   80157 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:44:19.699701   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:44:19.774496   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:44:19.774526   80157 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:44:19.874891   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:44:20.590260   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590292   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.590276   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590360   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.590587   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.590634   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.590644   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.590651   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590658   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.592402   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.592462   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.592410   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.592411   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.592414   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.592551   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.592476   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.592655   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.592952   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.593069   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.593093   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.634339   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.634370   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.634813   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.634864   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.634880   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.321337   80157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.446394551s)
	I0612 21:44:21.321389   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:21.321403   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:21.321802   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:21.321827   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:21.321968   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.322012   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:21.322023   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:21.322278   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:21.322294   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.322305   80157 addons.go:475] Verifying addon metrics-server=true in "no-preload-087875"
	I0612 21:44:21.324652   80157 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:44:21.326653   80157 addons.go:510] duration metric: took 2.01495884s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0612 21:44:21.589251   80157 pod_ready.go:92] pod "kube-proxy-lnhzt" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:21.589290   80157 pod_ready.go:81] duration metric: took 2.016278458s for pod "kube-proxy-lnhzt" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.589305   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.652083   80157 pod_ready.go:92] pod "kube-scheduler-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:21.652122   80157 pod_ready.go:81] duration metric: took 62.805318ms for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.652136   80157 pod_ready.go:38] duration metric: took 2.105136343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:44:21.652156   80157 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:44:21.652237   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:44:21.683110   80157 api_server.go:72] duration metric: took 2.371482611s to wait for apiserver process to appear ...
	I0612 21:44:21.683148   80157 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:44:21.683187   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:44:21.704637   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 200:
	ok
	I0612 21:44:21.714032   80157 api_server.go:141] control plane version: v1.30.1
	I0612 21:44:21.714061   80157 api_server.go:131] duration metric: took 30.904631ms to wait for apiserver health ...
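	The healthz probe logged above can be reproduced by hand against the same endpoint (the API server presents a self-signed CA here, hence -k; the expected response body is "ok"):
	
	curl -k https://192.168.72.63:8443/healthz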
	I0612 21:44:21.714070   80157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:44:21.751484   80157 system_pods.go:59] 9 kube-system pods found
	I0612 21:44:21.751520   80157 system_pods.go:61] "coredns-7db6d8ff4d-hsvvf" [2b6c768b-75e2-4c11-99db-1103367ccc20] Running
	I0612 21:44:21.751526   80157 system_pods.go:61] "coredns-7db6d8ff4d-v75tt" [8b48ba7d-8f66-4c31-ac14-3a38e18fa249] Running
	I0612 21:44:21.751532   80157 system_pods.go:61] "etcd-no-preload-087875" [36cea519-d5ea-41f0-893f-358fe8af4448] Running
	I0612 21:44:21.751537   80157 system_pods.go:61] "kube-apiserver-no-preload-087875" [a09319fb-adef-467d-8482-5adf57328c2b] Running
	I0612 21:44:21.751544   80157 system_pods.go:61] "kube-controller-manager-no-preload-087875" [466fead1-a45a-4b33-8587-dc894fa20073] Running
	I0612 21:44:21.751548   80157 system_pods.go:61] "kube-proxy-lnhzt" [bdf1156c-ba02-4551-aefa-66379b05e066] Running
	I0612 21:44:21.751552   80157 system_pods.go:61] "kube-scheduler-no-preload-087875" [fc8eccee-2e27-4ea0-9e6c-0d5c127cdd4f] Running
	I0612 21:44:21.751560   80157 system_pods.go:61] "metrics-server-569cc877fc-mdmgw" [17725ee6-1d17-4a1b-9c65-f596b9b7725f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:44:21.751568   80157 system_pods.go:61] "storage-provisioner" [90368fec-12d9-4baf-aef6-233691b5e99d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:44:21.751581   80157 system_pods.go:74] duration metric: took 37.503399ms to wait for pod list to return data ...
	I0612 21:44:21.751595   80157 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:44:21.943440   80157 default_sa.go:45] found service account: "default"
	I0612 21:44:21.943465   80157 default_sa.go:55] duration metric: took 191.863221ms for default service account to be created ...
	I0612 21:44:21.943473   80157 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:44:22.146922   80157 system_pods.go:86] 9 kube-system pods found
	I0612 21:44:22.146960   80157 system_pods.go:89] "coredns-7db6d8ff4d-hsvvf" [2b6c768b-75e2-4c11-99db-1103367ccc20] Running
	I0612 21:44:22.146969   80157 system_pods.go:89] "coredns-7db6d8ff4d-v75tt" [8b48ba7d-8f66-4c31-ac14-3a38e18fa249] Running
	I0612 21:44:22.146975   80157 system_pods.go:89] "etcd-no-preload-087875" [36cea519-d5ea-41f0-893f-358fe8af4448] Running
	I0612 21:44:22.146982   80157 system_pods.go:89] "kube-apiserver-no-preload-087875" [a09319fb-adef-467d-8482-5adf57328c2b] Running
	I0612 21:44:22.146988   80157 system_pods.go:89] "kube-controller-manager-no-preload-087875" [466fead1-a45a-4b33-8587-dc894fa20073] Running
	I0612 21:44:22.146994   80157 system_pods.go:89] "kube-proxy-lnhzt" [bdf1156c-ba02-4551-aefa-66379b05e066] Running
	I0612 21:44:22.147000   80157 system_pods.go:89] "kube-scheduler-no-preload-087875" [fc8eccee-2e27-4ea0-9e6c-0d5c127cdd4f] Running
	I0612 21:44:22.147012   80157 system_pods.go:89] "metrics-server-569cc877fc-mdmgw" [17725ee6-1d17-4a1b-9c65-f596b9b7725f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:44:22.147030   80157 system_pods.go:89] "storage-provisioner" [90368fec-12d9-4baf-aef6-233691b5e99d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:44:22.147042   80157 system_pods.go:126] duration metric: took 203.562938ms to wait for k8s-apps to be running ...
	I0612 21:44:22.147056   80157 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:44:22.147110   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:44:22.167568   80157 system_svc.go:56] duration metric: took 20.500218ms WaitForService to wait for kubelet
	I0612 21:44:22.167606   80157 kubeadm.go:576] duration metric: took 2.855984791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:44:22.167627   80157 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:44:22.343015   80157 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:44:22.343039   80157 node_conditions.go:123] node cpu capacity is 2
	I0612 21:44:22.343051   80157 node_conditions.go:105] duration metric: took 175.419211ms to run NodePressure ...
	I0612 21:44:22.343064   80157 start.go:240] waiting for startup goroutines ...
	I0612 21:44:22.343073   80157 start.go:245] waiting for cluster config update ...
	I0612 21:44:22.343085   80157 start.go:254] writing updated cluster config ...
	I0612 21:44:22.343387   80157 ssh_runner.go:195] Run: rm -f paused
	I0612 21:44:22.391092   80157 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:44:22.393268   80157 out.go:177] * Done! kubectl is now configured to use "no-preload-087875" cluster and "default" namespace by default
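	Per the "Done!" line, the kubeconfig updated earlier at /home/jenkins/minikube-integration/17779-14199/kubeconfig already selects the new cluster; a quick check against that file (the context name matching the profile is an assumption):
	
	kubectl --kubeconfig=/home/jenkins/minikube-integration/17779-14199/kubeconfig config current-context   # expected: no-preload-087875
	kubectl --kubeconfig=/home/jenkins/minikube-integration/17779-14199/kubeconfig get nodes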
	I0612 21:44:37.700712   80762 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:44:37.700862   80762 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:44:37.702455   80762 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:44:37.702552   80762 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:44:37.702639   80762 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:44:37.702749   80762 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:44:37.702887   80762 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0612 21:44:37.702992   80762 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:44:37.704955   80762 out.go:204]   - Generating certificates and keys ...
	I0612 21:44:37.705032   80762 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:44:37.705088   80762 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:44:37.705159   80762 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:44:37.705228   80762 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:44:37.705289   80762 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:44:37.705368   80762 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:44:37.705467   80762 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:44:37.705538   80762 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:44:37.705620   80762 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:44:37.705683   80762 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:44:37.705723   80762 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:44:37.705773   80762 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:44:37.705816   80762 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:44:37.705861   80762 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:44:37.705917   80762 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:44:37.705964   80762 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:44:37.706062   80762 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:44:37.706172   80762 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:44:37.706231   80762 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:44:37.706288   80762 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:44:37.707753   80762 out.go:204]   - Booting up control plane ...
	I0612 21:44:37.707857   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:44:37.707931   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:44:37.707994   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:44:37.708064   80762 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:44:37.708197   80762 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:44:37.708251   80762 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:44:37.708344   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.708536   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.708600   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.708770   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.708864   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709067   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709133   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709340   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709441   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709638   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709650   80762 kubeadm.go:309] 
	I0612 21:44:37.709683   80762 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:44:37.709721   80762 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:44:37.709728   80762 kubeadm.go:309] 
	I0612 21:44:37.709777   80762 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:44:37.709817   80762 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:44:37.709910   80762 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:44:37.709917   80762 kubeadm.go:309] 
	I0612 21:44:37.710018   80762 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:44:37.710052   80762 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:44:37.710083   80762 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:44:37.710089   80762 kubeadm.go:309] 
	I0612 21:44:37.710184   80762 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:44:37.710259   80762 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0612 21:44:37.710265   80762 kubeadm.go:309] 
	I0612 21:44:37.710359   80762 kubeadm.go:309] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:44:37.710431   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:44:37.710497   80762 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:44:37.710563   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:44:37.710607   80762 kubeadm.go:309] 
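	The error text above suggests its own diagnostics; a sketch of running them in one pass on the failing node (same CRI-O socket path as in the message, with --no-pager/tail added only for convenience):
	
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause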
	W0612 21:44:37.710666   80762 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0612 21:44:37.710709   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:44:38.170461   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:44:38.186842   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:44:38.198380   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:44:38.198400   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:44:38.198454   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:44:38.208876   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:44:38.208948   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:44:38.219641   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:44:38.229622   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:44:38.229685   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:44:38.240153   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:44:38.251342   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:44:38.251401   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:44:38.262662   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:44:38.272898   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:44:38.272954   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
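	The grep/rm sequence above is the stale-config check: each kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init is retried. A bash sketch of the same logic:
	
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done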
	I0612 21:44:38.283213   80762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:44:38.501637   80762 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:46:34.582636   80762 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:46:34.582745   80762 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:46:34.584702   80762 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:46:34.584775   80762 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:46:34.584898   80762 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:46:34.585029   80762 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:46:34.585172   80762 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0612 21:46:34.585263   80762 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:46:34.587030   80762 out.go:204]   - Generating certificates and keys ...
	I0612 21:46:34.587101   80762 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:46:34.587160   80762 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:46:34.587260   80762 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:46:34.587349   80762 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:46:34.587446   80762 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:46:34.587521   80762 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:46:34.587609   80762 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:46:34.587697   80762 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:46:34.587803   80762 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:46:34.587886   80762 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:46:34.588014   80762 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:46:34.588097   80762 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:46:34.588177   80762 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:46:34.588268   80762 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:46:34.588381   80762 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:46:34.588447   80762 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:46:34.588558   80762 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:46:34.588659   80762 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:46:34.588719   80762 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:46:34.588816   80762 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:46:34.590114   80762 out.go:204]   - Booting up control plane ...
	I0612 21:46:34.590226   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:46:34.590326   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:46:34.590444   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:46:34.590527   80762 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:46:34.590710   80762 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:46:34.590778   80762 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:46:34.590847   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591054   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591149   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591411   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591508   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591743   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591846   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.592108   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.592205   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.592395   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.592403   80762 kubeadm.go:309] 
	I0612 21:46:34.592436   80762 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:46:34.592485   80762 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:46:34.592500   80762 kubeadm.go:309] 
	I0612 21:46:34.592535   80762 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:46:34.592563   80762 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:46:34.592677   80762 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:46:34.592688   80762 kubeadm.go:309] 
	I0612 21:46:34.592820   80762 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:46:34.592855   80762 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:46:34.592883   80762 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:46:34.592890   80762 kubeadm.go:309] 
	I0612 21:46:34.593007   80762 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:46:34.593107   80762 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:46:34.593116   80762 kubeadm.go:309] 
	I0612 21:46:34.593224   80762 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:46:34.593342   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:46:34.593426   80762 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:46:34.593494   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:46:34.593552   80762 kubeadm.go:393] duration metric: took 8m2.356271864s to StartCluster
	I0612 21:46:34.593558   80762 kubeadm.go:309] 
	I0612 21:46:34.593589   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:46:34.593639   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:46:34.643842   80762 cri.go:89] found id: ""
	I0612 21:46:34.643876   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.643887   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:46:34.643905   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:46:34.643982   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:46:34.682878   80762 cri.go:89] found id: ""
	I0612 21:46:34.682899   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.682906   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:46:34.682912   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:46:34.682961   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:46:34.721931   80762 cri.go:89] found id: ""
	I0612 21:46:34.721955   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.721964   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:46:34.721969   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:46:34.722021   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:46:34.759233   80762 cri.go:89] found id: ""
	I0612 21:46:34.759266   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.759274   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:46:34.759280   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:46:34.759333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:46:34.800142   80762 cri.go:89] found id: ""
	I0612 21:46:34.800176   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.800186   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:46:34.800194   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:46:34.800256   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:46:34.836746   80762 cri.go:89] found id: ""
	I0612 21:46:34.836774   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.836784   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:46:34.836791   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:46:34.836850   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:46:34.876108   80762 cri.go:89] found id: ""
	I0612 21:46:34.876138   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.876147   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:46:34.876153   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:46:34.876202   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:46:34.912272   80762 cri.go:89] found id: ""
	I0612 21:46:34.912294   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.912301   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:46:34.912310   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:46:34.912324   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:46:34.997300   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:46:34.997331   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:46:34.997347   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:46:35.105602   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:46:35.105638   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:46:35.152818   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:46:35.152857   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:46:35.216504   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:46:35.216545   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0612 21:46:35.239531   80762 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0612 21:46:35.239581   80762 out.go:239] * 
	W0612 21:46:35.239646   80762 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:46:35.239672   80762 out.go:239] * 
	W0612 21:46:35.240600   80762 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0612 21:46:35.244822   80762 out.go:177] 
	W0612 21:46:35.246072   80762 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:46:35.246137   80762 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0612 21:46:35.246164   80762 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0612 21:46:35.247768   80762 out.go:177] 
	
	
	==> CRI-O <==
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.509457800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229340509418331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33887011-7aaf-422c-a658-900bba43397b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.510313049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6d3da85-1448-4441-8b20-0787b6cd9e73 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.510381828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6d3da85-1448-4441-8b20-0787b6cd9e73 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.510421005Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f6d3da85-1448-4441-8b20-0787b6cd9e73 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.543084970Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2c37cd1-5db4-44b0-8828-cf100f2bb143 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.543185140Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2c37cd1-5db4-44b0-8828-cf100f2bb143 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.544200147Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34cb6131-bdf6-47bf-9b4f-3ff1744dbb8f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.544680652Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229340544656541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34cb6131-bdf6-47bf-9b4f-3ff1744dbb8f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.545294443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f098dc9-8f98-4d2b-86e7-95d8a79b00ba name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.545358804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f098dc9-8f98-4d2b-86e7-95d8a79b00ba name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.545397105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7f098dc9-8f98-4d2b-86e7-95d8a79b00ba name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.577887078Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e27f1a20-b29a-4663-8f4e-39cda6338ba6 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.577973848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e27f1a20-b29a-4663-8f4e-39cda6338ba6 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.579126938Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20446eb4-04f3-4d9c-8bad-e1555f5412ed name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.579506877Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229340579488188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20446eb4-04f3-4d9c-8bad-e1555f5412ed name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.579960595Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01924a7a-f3f5-4916-8afa-2167faf00752 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.580029801Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01924a7a-f3f5-4916-8afa-2167faf00752 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.580062232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=01924a7a-f3f5-4916-8afa-2167faf00752 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.614979750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66ab7ad5-0502-4422-a7ce-70dd2c85cb0a name=/runtime.v1.RuntimeService/Version
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.615113276Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66ab7ad5-0502-4422-a7ce-70dd2c85cb0a name=/runtime.v1.RuntimeService/Version
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.616274931Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=580e0a2e-92d4-4a64-83a6-b11186148c4d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.616732586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229340616709899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=580e0a2e-92d4-4a64-83a6-b11186148c4d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.617266004Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25746c33-24ae-4d8a-99ab-f1974ec97b13 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.617313034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25746c33-24ae-4d8a-99ab-f1974ec97b13 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:55:40 old-k8s-version-983302 crio[651]: time="2024-06-12 21:55:40.617346148Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=25746c33-24ae-4d8a-99ab-f1974ec97b13 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun12 21:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056321] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044953] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.826136] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.486922] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.757887] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.131253] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.069367] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066150] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.207548] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.141383] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.298797] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +6.786115] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[  +0.069711] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.050220] systemd-fstab-generator[967]: Ignoring "noauto" option for root device
	[ +13.489395] kauditd_printk_skb: 46 callbacks suppressed
	[Jun12 21:42] systemd-fstab-generator[5031]: Ignoring "noauto" option for root device
	[Jun12 21:44] systemd-fstab-generator[5305]: Ignoring "noauto" option for root device
	[  +0.065559] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:55:40 up 17 min,  0 users,  load average: 0.06, 0.03, 0.02
	Linux old-k8s-version-983302 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:134 +0x191
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]: goroutine 143 [select]:
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000633f58, 0x4f0ac20, 0xc000622dc0, 0x1, 0xc0001000c0)
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00024e700, 0xc0001000c0)
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]: goroutine 126 [select]:
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000926640, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001c5740, 0x0, 0x0)
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0009461c0)
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jun 12 21:55:35 old-k8s-version-983302 kubelet[6483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jun 12 21:55:36 old-k8s-version-983302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jun 12 21:55:36 old-k8s-version-983302 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 12 21:55:36 old-k8s-version-983302 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 12 21:55:36 old-k8s-version-983302 kubelet[6492]: I0612 21:55:36.197710    6492 server.go:416] Version: v1.20.0
	Jun 12 21:55:36 old-k8s-version-983302 kubelet[6492]: I0612 21:55:36.198005    6492 server.go:837] Client rotation is on, will bootstrap in background
	Jun 12 21:55:36 old-k8s-version-983302 kubelet[6492]: I0612 21:55:36.200057    6492 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 12 21:55:36 old-k8s-version-983302 kubelet[6492]: I0612 21:55:36.201675    6492 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jun 12 21:55:36 old-k8s-version-983302 kubelet[6492]: W0612 21:55:36.201688    6492 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-983302 -n old-k8s-version-983302
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-983302 -n old-k8s-version-983302: exit status 2 (250.233581ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-983302" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.43s)
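The repeated kubelet-check failures captured above (connection refused on 127.0.0.1:10248, kubelet.service restart counter at 114) together with the K8S_KUBELET_NOT_RUNNING suggestion all point at the kubelet never becoming healthy on the old-k8s-version node. A minimal manual follow-up along the lines the log itself suggests might look like the sketch below; the profile name old-k8s-version-983302 is taken from this run, while the exact commands and the cgroup-driver retry are illustrative, not a confirmed fix:

	# Inspect the crash-looping kubelet on the guest (restart counter was at 114 above)
	minikube -p old-k8s-version-983302 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-983302 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	# Check whether CRI-O managed to start any control-plane containers at all
	minikube -p old-k8s-version-983302 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# If the kubelet log shows a cgroup-driver mismatch, retry the start with the systemd
	# driver, as suggested in the error output above
	minikube start -p old-k8s-version-983302 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd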

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (435.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-376087 -n default-k8s-diff-port-376087
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-06-12 21:58:44.034847314 +0000 UTC m=+6475.649297693
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-376087 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-376087 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.214µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-376087 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
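The assertion above expects the dashboard-metrics-scraper deployment to carry the image override registry.k8s.io/echoserver:1.4 passed to "addons enable dashboard" (see the Audit table below), but the describe call never ran because the test's context deadline had already expired. A hypothetical way to verify the same thing by hand against this profile, assuming the apiserver is reachable, is sketched here:

	# List the dashboard pods the addon was supposed to create
	kubectl --context default-k8s-diff-port-376087 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Print the image actually referenced by the metrics-scraper deployment
	kubectl --context default-k8s-diff-port-376087 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'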
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-376087 -n default-k8s-diff-port-376087
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-376087 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-376087 logs -n 25: (1.287396074s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-576552 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | disable-driver-mounts-576552                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-087875             | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-376087  | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-591460            | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-983302        | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-087875                  | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-376087       | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:42 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-591460                 | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-983302             | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:57 UTC | 12 Jun 24 21:57 UTC |
	| start   | -p newest-cni-007396 --memory=2200 --alsologtostderr   | newest-cni-007396            | jenkins | v1.33.1 | 12 Jun 24 21:57 UTC | 12 Jun 24 21:58 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:58 UTC | 12 Jun 24 21:58 UTC |
	| addons  | enable metrics-server -p newest-cni-007396             | newest-cni-007396            | jenkins | v1.33.1 | 12 Jun 24 21:58 UTC | 12 Jun 24 21:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-007396                                   | newest-cni-007396            | jenkins | v1.33.1 | 12 Jun 24 21:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:58 UTC |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 21:57:39
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 21:57:39.550876   86948 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:57:39.551091   86948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:57:39.551099   86948 out.go:304] Setting ErrFile to fd 2...
	I0612 21:57:39.551103   86948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:57:39.551305   86948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:57:39.551845   86948 out.go:298] Setting JSON to false
	I0612 21:57:39.552797   86948 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9605,"bootTime":1718219855,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:57:39.552852   86948 start.go:139] virtualization: kvm guest
	I0612 21:57:39.555092   86948 out.go:177] * [newest-cni-007396] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:57:39.556394   86948 notify.go:220] Checking for updates...
	I0612 21:57:39.556401   86948 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:57:39.557868   86948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:57:39.559183   86948 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:57:39.560464   86948 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:57:39.561707   86948 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:57:39.562862   86948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:57:39.564433   86948 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:57:39.564581   86948 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:57:39.564673   86948 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:57:39.564757   86948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:57:39.602527   86948 out.go:177] * Using the kvm2 driver based on user configuration
	I0612 21:57:39.603758   86948 start.go:297] selected driver: kvm2
	I0612 21:57:39.603773   86948 start.go:901] validating driver "kvm2" against <nil>
	I0612 21:57:39.603791   86948 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:57:39.604500   86948 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:57:39.604557   86948 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:57:39.619433   86948 install.go:137] /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:57:39.619484   86948 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0612 21:57:39.619509   86948 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0612 21:57:39.619809   86948 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0612 21:57:39.619881   86948 cni.go:84] Creating CNI manager for ""
	I0612 21:57:39.619898   86948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:57:39.619906   86948 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0612 21:57:39.619980   86948 start.go:340] cluster config:
	{Name:newest-cni-007396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-007396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
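For context, this cluster config is persisted as JSON under the profile directory (.minikube/profiles/<name>/config.json). A minimal sketch of reading a few of the fields that appear in the dump above; the struct is a simplified stand-in, not minikube's actual ClusterConfig type, and the path should be adjusted to your MINIKUBE_HOME:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Simplified stand-in for a handful of the fields shown in the log above;
	// minikube's real cluster config struct is much larger.
	type clusterConfig struct {
		Name             string `json:"Name"`
		Driver           string `json:"Driver"`
		Memory           int    `json:"Memory"`
		CPUs             int    `json:"CPUs"`
		KubernetesConfig struct {
			KubernetesVersion string `json:"KubernetesVersion"`
			ContainerRuntime  string `json:"ContainerRuntime"`
			NetworkPlugin     string `json:"NetworkPlugin"`
		} `json:"KubernetesConfig"`
	}

	func main() {
		// Example path; the report above uses a Jenkins-specific MINIKUBE_HOME.
		data, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/newest-cni-007396/config.json"))
		if err != nil {
			panic(err)
		}
		var cc clusterConfig
		if err := json.Unmarshal(data, &cc); err != nil {
			panic(err)
		}
		fmt.Printf("%s: %s runtime on %s driver, Kubernetes %s\n",
			cc.Name, cc.KubernetesConfig.ContainerRuntime, cc.Driver, cc.KubernetesConfig.KubernetesVersion)
	}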
	I0612 21:57:39.620120   86948 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:57:39.622163   86948 out.go:177] * Starting "newest-cni-007396" primary control-plane node in "newest-cni-007396" cluster
	I0612 21:57:39.623198   86948 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:57:39.623233   86948 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0612 21:57:39.623239   86948 cache.go:56] Caching tarball of preloaded images
	I0612 21:57:39.623306   86948 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:57:39.623317   86948 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 21:57:39.623400   86948 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/config.json ...
	I0612 21:57:39.623415   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/config.json: {Name:mkddd57eb5daa435dc3b365b712f5a3c8140a077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:57:39.623523   86948 start.go:360] acquireMachinesLock for newest-cni-007396: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:57:39.623548   86948 start.go:364] duration metric: took 14.312µs to acquireMachinesLock for "newest-cni-007396"
	I0612 21:57:39.623561   86948 start.go:93] Provisioning new machine with config: &{Name:newest-cni-007396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-007396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:57:39.623612   86948 start.go:125] createHost starting for "" (driver="kvm2")
	I0612 21:57:39.625081   86948 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 21:57:39.625187   86948 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:57:39.625223   86948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:57:39.639278   86948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37357
	I0612 21:57:39.639724   86948 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:57:39.640265   86948 main.go:141] libmachine: Using API Version  1
	I0612 21:57:39.640286   86948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:57:39.640560   86948 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:57:39.640759   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetMachineName
	I0612 21:57:39.640954   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:57:39.641113   86948 start.go:159] libmachine.API.Create for "newest-cni-007396" (driver="kvm2")
	I0612 21:57:39.641148   86948 client.go:168] LocalClient.Create starting
	I0612 21:57:39.641174   86948 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem
	I0612 21:57:39.641200   86948 main.go:141] libmachine: Decoding PEM data...
	I0612 21:57:39.641212   86948 main.go:141] libmachine: Parsing certificate...
	I0612 21:57:39.641270   86948 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem
	I0612 21:57:39.641290   86948 main.go:141] libmachine: Decoding PEM data...
	I0612 21:57:39.641303   86948 main.go:141] libmachine: Parsing certificate...
	I0612 21:57:39.641319   86948 main.go:141] libmachine: Running pre-create checks...
	I0612 21:57:39.641327   86948 main.go:141] libmachine: (newest-cni-007396) Calling .PreCreateCheck
	I0612 21:57:39.641700   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetConfigRaw
	I0612 21:57:39.642164   86948 main.go:141] libmachine: Creating machine...
	I0612 21:57:39.642181   86948 main.go:141] libmachine: (newest-cni-007396) Calling .Create
	I0612 21:57:39.642316   86948 main.go:141] libmachine: (newest-cni-007396) Creating KVM machine...
	I0612 21:57:39.643669   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found existing default KVM network
	I0612 21:57:39.644988   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:39.644853   86970 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b9:6b:ca} reservation:<nil>}
	I0612 21:57:39.645969   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:39.645912   86970 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002b8150}
	I0612 21:57:39.646019   86948 main.go:141] libmachine: (newest-cni-007396) DBG | created network xml: 
	I0612 21:57:39.646043   86948 main.go:141] libmachine: (newest-cni-007396) DBG | <network>
	I0612 21:57:39.646054   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   <name>mk-newest-cni-007396</name>
	I0612 21:57:39.646066   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   <dns enable='no'/>
	I0612 21:57:39.646074   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   
	I0612 21:57:39.646080   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0612 21:57:39.646086   86948 main.go:141] libmachine: (newest-cni-007396) DBG |     <dhcp>
	I0612 21:57:39.646094   86948 main.go:141] libmachine: (newest-cni-007396) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0612 21:57:39.646102   86948 main.go:141] libmachine: (newest-cni-007396) DBG |     </dhcp>
	I0612 21:57:39.646109   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   </ip>
	I0612 21:57:39.646115   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   
	I0612 21:57:39.646125   86948 main.go:141] libmachine: (newest-cni-007396) DBG | </network>
	I0612 21:57:39.646152   86948 main.go:141] libmachine: (newest-cni-007396) DBG | 
	I0612 21:57:39.652264   86948 main.go:141] libmachine: (newest-cni-007396) DBG | trying to create private KVM network mk-newest-cni-007396 192.168.50.0/24...
	I0612 21:57:39.722112   86948 main.go:141] libmachine: (newest-cni-007396) DBG | private KVM network mk-newest-cni-007396 192.168.50.0/24 created
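The driver builds the network XML shown above and asks libvirt to define and start it. A rough sketch of those two steps with the libvirt Go bindings, assuming libvirt.org/go/libvirt is available and reusing the XML printed in the log:

	package main

	import (
		"log"

		"libvirt.org/go/libvirt"
	)

	const networkXML = `<network>
	  <name>mk-newest-cni-007396</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Define the persistent network from XML, then bring it up.
		net, err := conn.NetworkDefineXML(networkXML)
		if err != nil {
			log.Fatal(err)
		}
		defer net.Free()
		if err := net.Create(); err != nil {
			log.Fatal(err)
		}
		log.Println("private network mk-newest-cni-007396 is active")
	}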
	I0612 21:57:39.722210   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:39.722103   86970 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:57:39.722240   86948 main.go:141] libmachine: (newest-cni-007396) Setting up store path in /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396 ...
	I0612 21:57:39.722309   86948 main.go:141] libmachine: (newest-cni-007396) Building disk image from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0612 21:57:39.722340   86948 main.go:141] libmachine: (newest-cni-007396) Downloading /home/jenkins/minikube-integration/17779-14199/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0612 21:57:39.949912   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:39.949748   86970 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa...
	I0612 21:57:40.367958   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:40.367803   86970 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/newest-cni-007396.rawdisk...
	I0612 21:57:40.367993   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Writing magic tar header
	I0612 21:57:40.368005   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Writing SSH key tar header
	I0612 21:57:40.368014   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:40.367917   86970 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396 ...
	I0612 21:57:40.368030   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396
	I0612 21:57:40.368039   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines
	I0612 21:57:40.368052   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396 (perms=drwx------)
	I0612 21:57:40.368066   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines (perms=drwxr-xr-x)
	I0612 21:57:40.368080   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube (perms=drwxr-xr-x)
	I0612 21:57:40.368097   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199 (perms=drwxrwxr-x)
	I0612 21:57:40.368106   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0612 21:57:40.368143   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:57:40.368168   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0612 21:57:40.368175   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199
	I0612 21:57:40.368184   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0612 21:57:40.368191   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins
	I0612 21:57:40.368216   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home
	I0612 21:57:40.368230   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Skipping /home - not owner
	I0612 21:57:40.368244   86948 main.go:141] libmachine: (newest-cni-007396) Creating domain...
	I0612 21:57:40.369412   86948 main.go:141] libmachine: (newest-cni-007396) define libvirt domain using xml: 
	I0612 21:57:40.369429   86948 main.go:141] libmachine: (newest-cni-007396) <domain type='kvm'>
	I0612 21:57:40.369436   86948 main.go:141] libmachine: (newest-cni-007396)   <name>newest-cni-007396</name>
	I0612 21:57:40.369441   86948 main.go:141] libmachine: (newest-cni-007396)   <memory unit='MiB'>2200</memory>
	I0612 21:57:40.369447   86948 main.go:141] libmachine: (newest-cni-007396)   <vcpu>2</vcpu>
	I0612 21:57:40.369455   86948 main.go:141] libmachine: (newest-cni-007396)   <features>
	I0612 21:57:40.369463   86948 main.go:141] libmachine: (newest-cni-007396)     <acpi/>
	I0612 21:57:40.369474   86948 main.go:141] libmachine: (newest-cni-007396)     <apic/>
	I0612 21:57:40.369483   86948 main.go:141] libmachine: (newest-cni-007396)     <pae/>
	I0612 21:57:40.369495   86948 main.go:141] libmachine: (newest-cni-007396)     
	I0612 21:57:40.369504   86948 main.go:141] libmachine: (newest-cni-007396)   </features>
	I0612 21:57:40.369520   86948 main.go:141] libmachine: (newest-cni-007396)   <cpu mode='host-passthrough'>
	I0612 21:57:40.369553   86948 main.go:141] libmachine: (newest-cni-007396)   
	I0612 21:57:40.369579   86948 main.go:141] libmachine: (newest-cni-007396)   </cpu>
	I0612 21:57:40.369590   86948 main.go:141] libmachine: (newest-cni-007396)   <os>
	I0612 21:57:40.369597   86948 main.go:141] libmachine: (newest-cni-007396)     <type>hvm</type>
	I0612 21:57:40.369622   86948 main.go:141] libmachine: (newest-cni-007396)     <boot dev='cdrom'/>
	I0612 21:57:40.369631   86948 main.go:141] libmachine: (newest-cni-007396)     <boot dev='hd'/>
	I0612 21:57:40.369636   86948 main.go:141] libmachine: (newest-cni-007396)     <bootmenu enable='no'/>
	I0612 21:57:40.369643   86948 main.go:141] libmachine: (newest-cni-007396)   </os>
	I0612 21:57:40.369650   86948 main.go:141] libmachine: (newest-cni-007396)   <devices>
	I0612 21:57:40.369668   86948 main.go:141] libmachine: (newest-cni-007396)     <disk type='file' device='cdrom'>
	I0612 21:57:40.369685   86948 main.go:141] libmachine: (newest-cni-007396)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/boot2docker.iso'/>
	I0612 21:57:40.369701   86948 main.go:141] libmachine: (newest-cni-007396)       <target dev='hdc' bus='scsi'/>
	I0612 21:57:40.369713   86948 main.go:141] libmachine: (newest-cni-007396)       <readonly/>
	I0612 21:57:40.369719   86948 main.go:141] libmachine: (newest-cni-007396)     </disk>
	I0612 21:57:40.369725   86948 main.go:141] libmachine: (newest-cni-007396)     <disk type='file' device='disk'>
	I0612 21:57:40.369734   86948 main.go:141] libmachine: (newest-cni-007396)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0612 21:57:40.369766   86948 main.go:141] libmachine: (newest-cni-007396)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/newest-cni-007396.rawdisk'/>
	I0612 21:57:40.369805   86948 main.go:141] libmachine: (newest-cni-007396)       <target dev='hda' bus='virtio'/>
	I0612 21:57:40.369819   86948 main.go:141] libmachine: (newest-cni-007396)     </disk>
	I0612 21:57:40.369832   86948 main.go:141] libmachine: (newest-cni-007396)     <interface type='network'>
	I0612 21:57:40.369846   86948 main.go:141] libmachine: (newest-cni-007396)       <source network='mk-newest-cni-007396'/>
	I0612 21:57:40.369857   86948 main.go:141] libmachine: (newest-cni-007396)       <model type='virtio'/>
	I0612 21:57:40.369868   86948 main.go:141] libmachine: (newest-cni-007396)     </interface>
	I0612 21:57:40.369884   86948 main.go:141] libmachine: (newest-cni-007396)     <interface type='network'>
	I0612 21:57:40.369900   86948 main.go:141] libmachine: (newest-cni-007396)       <source network='default'/>
	I0612 21:57:40.369911   86948 main.go:141] libmachine: (newest-cni-007396)       <model type='virtio'/>
	I0612 21:57:40.369918   86948 main.go:141] libmachine: (newest-cni-007396)     </interface>
	I0612 21:57:40.369927   86948 main.go:141] libmachine: (newest-cni-007396)     <serial type='pty'>
	I0612 21:57:40.369935   86948 main.go:141] libmachine: (newest-cni-007396)       <target port='0'/>
	I0612 21:57:40.369947   86948 main.go:141] libmachine: (newest-cni-007396)     </serial>
	I0612 21:57:40.369954   86948 main.go:141] libmachine: (newest-cni-007396)     <console type='pty'>
	I0612 21:57:40.369967   86948 main.go:141] libmachine: (newest-cni-007396)       <target type='serial' port='0'/>
	I0612 21:57:40.369977   86948 main.go:141] libmachine: (newest-cni-007396)     </console>
	I0612 21:57:40.369986   86948 main.go:141] libmachine: (newest-cni-007396)     <rng model='virtio'>
	I0612 21:57:40.369995   86948 main.go:141] libmachine: (newest-cni-007396)       <backend model='random'>/dev/random</backend>
	I0612 21:57:40.370002   86948 main.go:141] libmachine: (newest-cni-007396)     </rng>
	I0612 21:57:40.370016   86948 main.go:141] libmachine: (newest-cni-007396)     
	I0612 21:57:40.370026   86948 main.go:141] libmachine: (newest-cni-007396)     
	I0612 21:57:40.370036   86948 main.go:141] libmachine: (newest-cni-007396)   </devices>
	I0612 21:57:40.370046   86948 main.go:141] libmachine: (newest-cni-007396) </domain>
	I0612 21:57:40.370060   86948 main.go:141] libmachine: (newest-cni-007396) 
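Once the network exists, the domain XML above is handed to libvirt in the same way. Continuing the previous sketch inside the same main function (domainXML is assumed to hold the <domain> definition printed in the log):

	// Define the guest from its XML, then boot it; Create() on a defined
	// domain is the `virsh start` equivalent.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}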
	I0612 21:57:40.374484   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:ac:61:40 in network default
	I0612 21:57:40.375055   86948 main.go:141] libmachine: (newest-cni-007396) Ensuring networks are active...
	I0612 21:57:40.375074   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:40.375755   86948 main.go:141] libmachine: (newest-cni-007396) Ensuring network default is active
	I0612 21:57:40.376055   86948 main.go:141] libmachine: (newest-cni-007396) Ensuring network mk-newest-cni-007396 is active
	I0612 21:57:40.376588   86948 main.go:141] libmachine: (newest-cni-007396) Getting domain xml...
	I0612 21:57:40.377311   86948 main.go:141] libmachine: (newest-cni-007396) Creating domain...
	I0612 21:57:41.646694   86948 main.go:141] libmachine: (newest-cni-007396) Waiting to get IP...
	I0612 21:57:41.647535   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:41.647983   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:41.648009   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:41.647967   86970 retry.go:31] will retry after 232.64418ms: waiting for machine to come up
	I0612 21:57:41.882517   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:41.883132   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:41.883162   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:41.883063   86970 retry.go:31] will retry after 300.678306ms: waiting for machine to come up
	I0612 21:57:42.185385   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:42.185837   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:42.185867   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:42.185788   86970 retry.go:31] will retry after 322.355198ms: waiting for machine to come up
	I0612 21:57:42.509318   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:42.509851   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:42.509874   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:42.509823   86970 retry.go:31] will retry after 383.48604ms: waiting for machine to come up
	I0612 21:57:42.895499   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:42.896051   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:42.896083   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:42.896000   86970 retry.go:31] will retry after 681.668123ms: waiting for machine to come up
	I0612 21:57:43.579089   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:43.579655   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:43.579692   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:43.579608   86970 retry.go:31] will retry after 772.173706ms: waiting for machine to come up
	I0612 21:57:44.353493   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:44.353942   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:44.353965   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:44.353889   86970 retry.go:31] will retry after 1.081187064s: waiting for machine to come up
	I0612 21:57:45.436451   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:45.436949   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:45.436977   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:45.436901   86970 retry.go:31] will retry after 1.312080042s: waiting for machine to come up
	I0612 21:57:46.751288   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:46.751800   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:46.751823   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:46.751758   86970 retry.go:31] will retry after 1.211250846s: waiting for machine to come up
	I0612 21:57:47.964813   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:47.965255   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:47.965280   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:47.965195   86970 retry.go:31] will retry after 1.673381258s: waiting for machine to come up
	I0612 21:57:49.640173   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:49.640641   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:49.640664   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:49.640609   86970 retry.go:31] will retry after 1.995026566s: waiting for machine to come up
	I0612 21:57:51.638102   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:51.638614   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:51.638639   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:51.638561   86970 retry.go:31] will retry after 3.197679013s: waiting for machine to come up
	I0612 21:57:54.837509   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:54.838000   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:54.838028   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:54.837956   86970 retry.go:31] will retry after 3.462181977s: waiting for machine to come up
	I0612 21:57:58.304412   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:58.304897   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:58.304931   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:58.304819   86970 retry.go:31] will retry after 3.755357309s: waiting for machine to come up
	I0612 21:58:02.062774   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.063322   86948 main.go:141] libmachine: (newest-cni-007396) Found IP for machine: 192.168.50.207
	I0612 21:58:02.063351   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has current primary IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
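The "will retry after ..." lines above come from a simple poll-with-growing-backoff loop: query the network's DHCP leases for the domain, and if no address has shown up yet, sleep a jittered, increasing interval and try again. A generic sketch of that pattern (getIP is a stand-in for the driver's lease lookup, not a real minikube function):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("no IP yet")

	// getIP is a stand-in for querying the DHCP leases of the libvirt network.
	func getIP() (string, error) { return "", errNoIP }

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := getIP(); err == nil {
				return ip, nil
			}
			// Jitter the delay so concurrent waiters don't poll in lockstep,
			// then grow it so the loop backs off as the wait drags on.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
	}

	func main() {
		if _, err := waitForIP(2 * time.Second); err != nil {
			fmt.Println(err)
		}
	}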
	I0612 21:58:02.063381   86948 main.go:141] libmachine: (newest-cni-007396) Reserving static IP address...
	I0612 21:58:02.063736   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find host DHCP lease matching {name: "newest-cni-007396", mac: "52:54:00:a5:e1:fb", ip: "192.168.50.207"} in network mk-newest-cni-007396
	I0612 21:58:02.146932   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Getting to WaitForSSH function...
	I0612 21:58:02.146965   86948 main.go:141] libmachine: (newest-cni-007396) Reserved static IP address: 192.168.50.207
	I0612 21:58:02.146979   86948 main.go:141] libmachine: (newest-cni-007396) Waiting for SSH to be available...
	I0612 21:58:02.149790   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.150289   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.150323   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.150483   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Using SSH client type: external
	I0612 21:58:02.150512   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa (-rw-------)
	I0612 21:58:02.150548   86948 main.go:141] libmachine: (newest-cni-007396) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:58:02.150565   86948 main.go:141] libmachine: (newest-cni-007396) DBG | About to run SSH command:
	I0612 21:58:02.150580   86948 main.go:141] libmachine: (newest-cni-007396) DBG | exit 0
	I0612 21:58:02.279618   86948 main.go:141] libmachine: (newest-cni-007396) DBG | SSH cmd err, output: <nil>: 
	I0612 21:58:02.279899   86948 main.go:141] libmachine: (newest-cni-007396) KVM machine creation complete!
	I0612 21:58:02.280217   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetConfigRaw
	I0612 21:58:02.280700   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:02.280886   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:02.281060   86948 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0612 21:58:02.281077   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetState
	I0612 21:58:02.282541   86948 main.go:141] libmachine: Detecting operating system of created instance...
	I0612 21:58:02.282554   86948 main.go:141] libmachine: Waiting for SSH to be available...
	I0612 21:58:02.282560   86948 main.go:141] libmachine: Getting to WaitForSSH function...
	I0612 21:58:02.282566   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.285113   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.285505   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.285535   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.285681   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:02.285880   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.286029   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.286215   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:02.286406   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:02.286581   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:02.286594   86948 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0612 21:58:02.394673   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
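"Waiting for SSH" boils down to dialing port 22 with the generated key and running a no-op command until it succeeds. A sketch with golang.org/x/crypto/ssh; the key path and address are the ones from this run and should be treated as examples:

	package main

	import (
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs
		}
		client, err := ssh.Dial("tcp", "192.168.50.207:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		// The readiness probe is literally `exit 0`, as in the log above.
		if err := sess.Run("exit 0"); err != nil {
			log.Fatal(err)
		}
		log.Println("SSH is available")
	}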
	I0612 21:58:02.394702   86948 main.go:141] libmachine: Detecting the provisioner...
	I0612 21:58:02.394714   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.397514   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.397799   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.397821   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.397989   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:02.398190   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.398390   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.398545   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:02.398715   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:02.398921   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:02.398932   86948 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0612 21:58:02.504115   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0612 21:58:02.504176   86948 main.go:141] libmachine: found compatible host: buildroot
	I0612 21:58:02.504183   86948 main.go:141] libmachine: Provisioning with buildroot...
	I0612 21:58:02.504190   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetMachineName
	I0612 21:58:02.504433   86948 buildroot.go:166] provisioning hostname "newest-cni-007396"
	I0612 21:58:02.504459   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetMachineName
	I0612 21:58:02.504702   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.508127   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.508526   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.508555   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.508732   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:02.508920   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.509065   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.509177   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:02.509332   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:02.509586   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:02.509607   86948 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-007396 && echo "newest-cni-007396" | sudo tee /etc/hostname
	I0612 21:58:02.630796   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-007396
	
	I0612 21:58:02.630828   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.633959   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.634507   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.634545   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.634710   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:02.634901   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.635104   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.635310   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:02.635497   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:02.635697   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:02.635723   86948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-007396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-007396/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-007396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:58:02.754971   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
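The hostname here-script above is generated from the target hostname before being sent over SSH. A minimal sketch of assembling the same command string; buildHostnameCmd is a hypothetical helper, not minikube's actual function:

	package main

	import "fmt"

	// buildHostnameCmd returns the shell snippet that sets the hostname and
	// ensures /etc/hosts has a matching 127.0.1.1 entry, as in the log above.
	func buildHostnameCmd(hostname string) string {
		return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname

		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	}

	func main() {
		fmt.Println(buildHostnameCmd("newest-cni-007396"))
	}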
	I0612 21:58:02.755003   86948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:58:02.755025   86948 buildroot.go:174] setting up certificates
	I0612 21:58:02.755037   86948 provision.go:84] configureAuth start
	I0612 21:58:02.755049   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetMachineName
	I0612 21:58:02.755367   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetIP
	I0612 21:58:02.757918   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.758342   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.758374   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.758471   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.761085   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.761409   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.761437   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.761582   86948 provision.go:143] copyHostCerts
	I0612 21:58:02.761670   86948 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:58:02.761680   86948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:58:02.761744   86948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:58:02.761842   86948 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:58:02.761850   86948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:58:02.761872   86948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:58:02.761932   86948 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:58:02.761939   86948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:58:02.761959   86948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:58:02.762037   86948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.newest-cni-007396 san=[127.0.0.1 192.168.50.207 localhost minikube newest-cni-007396]
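configureAuth issues a server certificate whose SANs cover every name and IP the machine might be reached by (127.0.0.1, the VM IP, localhost, minikube, the profile name). A condensed sketch of producing such a certificate with crypto/x509; the CA here is generated on the fly as a stand-in for the ca.pem/ca-key.pem loaded above, and error handling is trimmed for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Self-signed stand-in for the minikube CA; the real flow loads ca.pem/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs listed in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-007396"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			DNSNames:     []string{"localhost", "minikube", "newest-cni-007396"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.207")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}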
	I0612 21:58:02.983584   86948 provision.go:177] copyRemoteCerts
	I0612 21:58:02.983643   86948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:58:02.983665   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.986420   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.986728   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.986767   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.986935   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:02.987149   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.987356   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:02.987507   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:03.069906   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0612 21:58:03.095863   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 21:58:03.124797   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:58:03.149919   86948 provision.go:87] duration metric: took 394.869081ms to configureAuth
	I0612 21:58:03.149945   86948 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:58:03.150170   86948 config.go:182] Loaded profile config "newest-cni-007396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:58:03.150272   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:03.153322   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.153699   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.153737   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.153974   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:03.154243   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.154441   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.154623   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:03.154845   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:03.154995   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:03.155009   86948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:58:03.430020   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:58:03.430053   86948 main.go:141] libmachine: Checking connection to Docker...
	I0612 21:58:03.430064   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetURL
	I0612 21:58:03.431420   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Using libvirt version 6000000
	I0612 21:58:03.433660   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.434051   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.434083   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.434223   86948 main.go:141] libmachine: Docker is up and running!
	I0612 21:58:03.434238   86948 main.go:141] libmachine: Reticulating splines...
	I0612 21:58:03.434247   86948 client.go:171] duration metric: took 23.793089795s to LocalClient.Create
	I0612 21:58:03.434273   86948 start.go:167] duration metric: took 23.793159772s to libmachine.API.Create "newest-cni-007396"
	I0612 21:58:03.434286   86948 start.go:293] postStartSetup for "newest-cni-007396" (driver="kvm2")
	I0612 21:58:03.434298   86948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:58:03.434317   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:03.434571   86948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:58:03.434594   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:03.436668   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.436966   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.436998   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.437209   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:03.437409   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.437582   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:03.437706   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:03.526365   86948 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:58:03.530621   86948 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:58:03.530646   86948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:58:03.530713   86948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:58:03.531006   86948 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:58:03.531139   86948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:58:03.541890   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:58:03.567793   86948 start.go:296] duration metric: took 133.495039ms for postStartSetup
	I0612 21:58:03.567838   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetConfigRaw
	I0612 21:58:03.568519   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetIP
	I0612 21:58:03.571244   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.571648   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.571675   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.571966   86948 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/config.json ...
	I0612 21:58:03.572180   86948 start.go:128] duration metric: took 23.948557924s to createHost
	I0612 21:58:03.572207   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:03.574448   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.574799   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.574824   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.575004   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:03.575225   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.575414   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.575577   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:03.575750   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:03.575947   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:03.575960   86948 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:58:03.680255   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718229483.653291457
	
	I0612 21:58:03.680279   86948 fix.go:216] guest clock: 1718229483.653291457
	I0612 21:58:03.680288   86948 fix.go:229] Guest: 2024-06-12 21:58:03.653291457 +0000 UTC Remote: 2024-06-12 21:58:03.572192588 +0000 UTC m=+24.058769808 (delta=81.098869ms)
	I0612 21:58:03.680348   86948 fix.go:200] guest clock delta is within tolerance: 81.098869ms
	I0612 21:58:03.680359   86948 start.go:83] releasing machines lock for "newest-cni-007396", held for 24.056803081s
	I0612 21:58:03.680388   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:03.680651   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetIP
	I0612 21:58:03.683199   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.683495   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.683520   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.683694   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:03.684217   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:03.684420   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:03.684511   86948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:58:03.684561   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:03.684619   86948 ssh_runner.go:195] Run: cat /version.json
	I0612 21:58:03.684642   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:03.687373   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.687651   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.687709   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.687765   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.687870   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:03.688095   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.688146   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.688172   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.688279   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:03.688389   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:03.688453   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:03.688521   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.688685   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:03.688838   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:03.764995   86948 ssh_runner.go:195] Run: systemctl --version
	I0612 21:58:03.787664   86948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:58:03.948904   86948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:58:03.955287   86948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:58:03.955368   86948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:58:03.973537   86948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:58:03.973563   86948 start.go:494] detecting cgroup driver to use...
	I0612 21:58:03.973630   86948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:58:03.991002   86948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:58:04.004854   86948 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:58:04.004913   86948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:58:04.019058   86948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:58:04.032658   86948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:58:04.158544   86948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:58:04.315596   86948 docker.go:233] disabling docker service ...
	I0612 21:58:04.315682   86948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:58:04.333215   86948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:58:04.350500   86948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:58:04.497343   86948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:58:04.640728   86948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:58:04.668553   86948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:58:04.691878   86948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:58:04.691939   86948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:58:04.706849   86948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:58:04.706901   86948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:58:04.717640   86948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:58:04.729069   86948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:58:04.741733   86948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:58:04.754037   86948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:58:04.765874   86948 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:58:04.785919   86948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:58:04.797651   86948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:58:04.807726   86948 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:58:04.807786   86948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:58:04.821239   86948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:58:04.835092   86948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:58:04.982309   86948 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:58:05.139997   86948 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:58:05.140070   86948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:58:05.146463   86948 start.go:562] Will wait 60s for crictl version
	I0612 21:58:05.146517   86948 ssh_runner.go:195] Run: which crictl
	I0612 21:58:05.150978   86948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:58:05.200770   86948 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:58:05.200843   86948 ssh_runner.go:195] Run: crio --version
	I0612 21:58:05.233305   86948 ssh_runner.go:195] Run: crio --version
	I0612 21:58:05.271552   86948 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:58:05.272867   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetIP
	I0612 21:58:05.275387   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:05.275787   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:05.275820   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:05.275981   86948 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0612 21:58:05.280392   86948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:58:05.297132   86948 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0612 21:58:05.298554   86948 kubeadm.go:877] updating cluster {Name:newest-cni-007396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-007396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.207 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:58:05.298678   86948 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:58:05.298737   86948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:58:05.337708   86948 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:58:05.337763   86948 ssh_runner.go:195] Run: which lz4
	I0612 21:58:05.341928   86948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0612 21:58:05.346383   86948 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:58:05.346413   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 21:58:06.865952   86948 crio.go:462] duration metric: took 1.524051425s to copy over tarball
	I0612 21:58:06.866020   86948 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:58:09.120553   86948 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.254511001s)
	I0612 21:58:09.120579   86948 crio.go:469] duration metric: took 2.254598258s to extract the tarball
	I0612 21:58:09.120589   86948 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:58:09.160964   86948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:58:09.211479   86948 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:58:09.211501   86948 cache_images.go:84] Images are preloaded, skipping loading
	I0612 21:58:09.211508   86948 kubeadm.go:928] updating node { 192.168.50.207 8443 v1.30.1 crio true true} ...
	I0612 21:58:09.211628   86948 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-007396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:newest-cni-007396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:58:09.211712   86948 ssh_runner.go:195] Run: crio config
	I0612 21:58:09.264731   86948 cni.go:84] Creating CNI manager for ""
	I0612 21:58:09.264750   86948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:58:09.264757   86948 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0612 21:58:09.264778   86948 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.207 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-007396 NodeName:newest-cni-007396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:58:09.264915   86948 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-007396"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:58:09.264972   86948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:58:09.275107   86948 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:58:09.275189   86948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:58:09.284547   86948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0612 21:58:09.301703   86948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:58:09.318529   86948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0612 21:58:09.335761   86948 ssh_runner.go:195] Run: grep 192.168.50.207	control-plane.minikube.internal$ /etc/hosts
	I0612 21:58:09.340128   86948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:58:09.354191   86948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:58:09.489939   86948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:58:09.508379   86948 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396 for IP: 192.168.50.207
	I0612 21:58:09.508400   86948 certs.go:194] generating shared ca certs ...
	I0612 21:58:09.508419   86948 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:09.508563   86948 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:58:09.508626   86948 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:58:09.508641   86948 certs.go:256] generating profile certs ...
	I0612 21:58:09.508708   86948 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/client.key
	I0612 21:58:09.508729   86948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/client.crt with IP's: []
	I0612 21:58:09.646440   86948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/client.crt ...
	I0612 21:58:09.646468   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/client.crt: {Name:mkc8d2681965bb16e4abe8bad19c8322752630f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:09.646660   86948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/client.key ...
	I0612 21:58:09.646675   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/client.key: {Name:mkfea61ee91e6b012e734ab300bc57a95ec6dee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:09.646759   86948 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.key.7c9e52d7
	I0612 21:58:09.646774   86948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.crt.7c9e52d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.207]
	I0612 21:58:09.781803   86948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.crt.7c9e52d7 ...
	I0612 21:58:09.781837   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.crt.7c9e52d7: {Name:mkf4dc4131392447b68af9b8a04ac3d6e5d9d16f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:09.782056   86948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.key.7c9e52d7 ...
	I0612 21:58:09.782090   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.key.7c9e52d7: {Name:mk98e37ee3f5da6e372801d2604565c36364469a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:09.782208   86948 certs.go:381] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.crt.7c9e52d7 -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.crt
	I0612 21:58:09.782322   86948 certs.go:385] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.key.7c9e52d7 -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.key
	I0612 21:58:09.782385   86948 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.key
	I0612 21:58:09.782411   86948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.crt with IP's: []
	I0612 21:58:09.920251   86948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.crt ...
	I0612 21:58:09.920276   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.crt: {Name:mke1aa3213902e5b9f72aa2b601c889050adacc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:09.920445   86948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.key ...
	I0612 21:58:09.920461   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.key: {Name:mk9212b5a154365129543410a8c5012b30573116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:09.920673   86948 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:58:09.920708   86948 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:58:09.920718   86948 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:58:09.920741   86948 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:58:09.920761   86948 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:58:09.920784   86948 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:58:09.920818   86948 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:58:09.921501   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:58:09.951234   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:58:09.976768   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:58:10.000677   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:58:10.027609   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 21:58:10.053069   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 21:58:10.080270   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:58:10.106927   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:58:10.132480   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:58:10.159497   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:58:10.189262   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:58:10.214862   86948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:58:10.233571   86948 ssh_runner.go:195] Run: openssl version
	I0612 21:58:10.239298   86948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:58:10.250683   86948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:58:10.255295   86948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:58:10.255357   86948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:58:10.261255   86948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:58:10.272803   86948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:58:10.289408   86948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:58:10.294212   86948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:58:10.294267   86948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:58:10.302667   86948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:58:10.321450   86948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:58:10.335517   86948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:58:10.341419   86948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:58:10.341488   86948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:58:10.350555   86948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:58:10.362759   86948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:58:10.369029   86948 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 21:58:10.369088   86948 kubeadm.go:391] StartCluster: {Name:newest-cni-007396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-007396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.207 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:58:10.369171   86948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:58:10.369229   86948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:58:10.406297   86948 cri.go:89] found id: ""
	I0612 21:58:10.406376   86948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0612 21:58:10.416929   86948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:58:10.426929   86948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:58:10.436717   86948 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:58:10.436743   86948 kubeadm.go:156] found existing configuration files:
	
	I0612 21:58:10.436792   86948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:58:10.446501   86948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:58:10.446560   86948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:58:10.456054   86948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:58:10.465116   86948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:58:10.465165   86948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:58:10.474674   86948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:58:10.484274   86948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:58:10.484315   86948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:58:10.494358   86948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:58:10.503951   86948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:58:10.503999   86948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:58:10.513541   86948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:58:10.621012   86948 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 21:58:10.621129   86948 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:58:10.749156   86948 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:58:10.749308   86948 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:58:10.749442   86948 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:58:10.987184   86948 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:58:11.117107   86948 out.go:204]   - Generating certificates and keys ...
	I0612 21:58:11.117241   86948 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:58:11.117335   86948 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:58:11.117426   86948 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0612 21:58:11.332874   86948 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0612 21:58:11.794187   86948 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0612 21:58:11.915133   86948 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0612 21:58:12.182141   86948 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0612 21:58:12.182380   86948 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-007396] and IPs [192.168.50.207 127.0.0.1 ::1]
	I0612 21:58:12.590048   86948 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0612 21:58:12.590278   86948 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-007396] and IPs [192.168.50.207 127.0.0.1 ::1]
	I0612 21:58:12.689980   86948 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0612 21:58:12.865854   86948 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0612 21:58:12.947581   86948 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0612 21:58:12.947883   86948 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:58:13.141280   86948 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:58:13.330698   86948 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 21:58:13.405686   86948 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:58:13.489125   86948 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:58:13.617590   86948 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:58:13.618344   86948 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:58:13.622803   86948 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:58:13.624841   86948 out.go:204]   - Booting up control plane ...
	I0612 21:58:13.624928   86948 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:58:13.625029   86948 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:58:13.625461   86948 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:58:13.641310   86948 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:58:13.643459   86948 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:58:13.643572   86948 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:58:13.774077   86948 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 21:58:13.774205   86948 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 21:58:14.775671   86948 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002167299s
	I0612 21:58:14.775770   86948 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 21:58:20.274766   86948 kubeadm.go:309] [api-check] The API server is healthy after 5.501170917s
	I0612 21:58:20.293180   86948 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 21:58:20.313804   86948 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 21:58:20.358713   86948 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 21:58:20.358977   86948 kubeadm.go:309] [mark-control-plane] Marking the node newest-cni-007396 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 21:58:20.376957   86948 kubeadm.go:309] [bootstrap-token] Using token: ap57h1.bcf4gjm029dmbwa9
	I0612 21:58:20.378627   86948 out.go:204]   - Configuring RBAC rules ...
	I0612 21:58:20.378811   86948 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 21:58:20.389584   86948 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 21:58:20.402127   86948 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 21:58:20.414966   86948 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 21:58:20.424366   86948 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 21:58:20.434058   86948 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 21:58:20.681506   86948 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 21:58:21.123454   86948 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 21:58:21.681294   86948 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 21:58:21.681351   86948 kubeadm.go:309] 
	I0612 21:58:21.681444   86948 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 21:58:21.681456   86948 kubeadm.go:309] 
	I0612 21:58:21.681563   86948 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 21:58:21.681574   86948 kubeadm.go:309] 
	I0612 21:58:21.681627   86948 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 21:58:21.681716   86948 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 21:58:21.681783   86948 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 21:58:21.681793   86948 kubeadm.go:309] 
	I0612 21:58:21.681874   86948 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 21:58:21.681887   86948 kubeadm.go:309] 
	I0612 21:58:21.681943   86948 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 21:58:21.681951   86948 kubeadm.go:309] 
	I0612 21:58:21.682016   86948 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 21:58:21.682119   86948 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 21:58:21.682234   86948 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 21:58:21.682246   86948 kubeadm.go:309] 
	I0612 21:58:21.682380   86948 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 21:58:21.682499   86948 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 21:58:21.682513   86948 kubeadm.go:309] 
	I0612 21:58:21.682639   86948 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ap57h1.bcf4gjm029dmbwa9 \
	I0612 21:58:21.682793   86948 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 21:58:21.682825   86948 kubeadm.go:309] 	--control-plane 
	I0612 21:58:21.682835   86948 kubeadm.go:309] 
	I0612 21:58:21.682950   86948 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 21:58:21.682962   86948 kubeadm.go:309] 
	I0612 21:58:21.683106   86948 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ap57h1.bcf4gjm029dmbwa9 \
	I0612 21:58:21.683259   86948 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 21:58:21.683429   86948 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:58:21.683458   86948 cni.go:84] Creating CNI manager for ""
	I0612 21:58:21.683472   86948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:58:21.685477   86948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:58:21.686802   86948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:58:21.700191   86948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:58:21.722111   86948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:58:21.722176   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:21.722202   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-007396 minikube.k8s.io/updated_at=2024_06_12T21_58_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=newest-cni-007396 minikube.k8s.io/primary=true
	I0612 21:58:21.953387   86948 ops.go:34] apiserver oom_adj: -16
	I0612 21:58:21.953438   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:22.454537   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:22.954181   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:23.453931   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:23.953994   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:24.454407   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:24.954182   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:25.454300   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:25.953518   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:26.453740   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:26.953940   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:27.454030   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:27.954217   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:28.454157   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:28.953544   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:29.453862   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:29.953973   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:30.453562   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:30.953669   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:31.453454   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:31.953594   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:32.454081   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:32.953549   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:33.454345   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:33.954284   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:34.454408   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:34.547344   86948 kubeadm.go:1107] duration metric: took 12.825231402s to wait for elevateKubeSystemPrivileges
	W0612 21:58:34.547385   86948 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 21:58:34.547396   86948 kubeadm.go:393] duration metric: took 24.178318758s to StartCluster
	I0612 21:58:34.547414   86948 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:34.547495   86948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:58:34.549447   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:34.549652   86948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0612 21:58:34.549667   86948 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.207 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:58:34.551622   86948 out.go:177] * Verifying Kubernetes components...
	I0612 21:58:34.549873   86948 config.go:182] Loaded profile config "newest-cni-007396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:58:34.549749   86948 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:58:34.554137   86948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:58:34.552957   86948 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-007396"
	I0612 21:58:34.554235   86948 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-007396"
	I0612 21:58:34.552971   86948 addons.go:69] Setting default-storageclass=true in profile "newest-cni-007396"
	I0612 21:58:34.554273   86948 host.go:66] Checking if "newest-cni-007396" exists ...
	I0612 21:58:34.554293   86948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-007396"
	I0612 21:58:34.554631   86948 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:58:34.554631   86948 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:58:34.554656   86948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:58:34.554668   86948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:58:34.570456   86948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35491
	I0612 21:58:34.570653   86948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34337
	I0612 21:58:34.570992   86948 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:58:34.571115   86948 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:58:34.571524   86948 main.go:141] libmachine: Using API Version  1
	I0612 21:58:34.571547   86948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:58:34.571687   86948 main.go:141] libmachine: Using API Version  1
	I0612 21:58:34.571706   86948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:58:34.571921   86948 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:58:34.572087   86948 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:58:34.572427   86948 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:58:34.572455   86948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:58:34.572733   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetState
	I0612 21:58:34.577156   86948 addons.go:234] Setting addon default-storageclass=true in "newest-cni-007396"
	I0612 21:58:34.577201   86948 host.go:66] Checking if "newest-cni-007396" exists ...
	I0612 21:58:34.577545   86948 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:58:34.577573   86948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:58:34.589306   86948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36275
	I0612 21:58:34.589759   86948 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:58:34.590343   86948 main.go:141] libmachine: Using API Version  1
	I0612 21:58:34.590370   86948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:58:34.590714   86948 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:58:34.590945   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetState
	I0612 21:58:34.592939   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:34.595099   86948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:58:34.593962   86948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I0612 21:58:34.595969   86948 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:58:34.596615   86948 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:58:34.596631   86948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:58:34.596645   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:34.597442   86948 main.go:141] libmachine: Using API Version  1
	I0612 21:58:34.597473   86948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:58:34.597840   86948 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:58:34.598500   86948 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:58:34.598543   86948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:58:34.600313   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:34.600744   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:34.600770   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:34.601082   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:34.601281   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:34.601433   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:34.601586   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:34.613275   86948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38897
	I0612 21:58:34.613677   86948 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:58:34.614115   86948 main.go:141] libmachine: Using API Version  1
	I0612 21:58:34.614133   86948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:58:34.614422   86948 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:58:34.614591   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetState
	I0612 21:58:34.616343   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:34.616566   86948 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:58:34.616582   86948 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:58:34.616600   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:34.619820   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:34.620104   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:34.620124   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:34.620268   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:34.620388   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:34.620490   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:34.620589   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:34.849923   86948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:58:34.849967   86948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0612 21:58:34.966129   86948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:58:35.029466   86948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:58:35.369654   86948 start.go:946] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0612 21:58:35.371980   86948 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:58:35.372059   86948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:58:35.695766   86948 main.go:141] libmachine: Making call to close driver server
	I0612 21:58:35.695798   86948 main.go:141] libmachine: (newest-cni-007396) Calling .Close
	I0612 21:58:35.695855   86948 api_server.go:72] duration metric: took 1.146158724s to wait for apiserver process to appear ...
	I0612 21:58:35.695873   86948 main.go:141] libmachine: Making call to close driver server
	I0612 21:58:35.695887   86948 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:58:35.695899   86948 main.go:141] libmachine: (newest-cni-007396) Calling .Close
	I0612 21:58:35.695911   86948 api_server.go:253] Checking apiserver healthz at https://192.168.50.207:8443/healthz ...
	I0612 21:58:35.696286   86948 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:58:35.696298   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Closing plugin on server side
	I0612 21:58:35.696300   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Closing plugin on server side
	I0612 21:58:35.696305   86948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:58:35.696354   86948 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:58:35.696318   86948 main.go:141] libmachine: Making call to close driver server
	I0612 21:58:35.696395   86948 main.go:141] libmachine: (newest-cni-007396) Calling .Close
	I0612 21:58:35.696378   86948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:58:35.696507   86948 main.go:141] libmachine: Making call to close driver server
	I0612 21:58:35.696516   86948 main.go:141] libmachine: (newest-cni-007396) Calling .Close
	I0612 21:58:35.696787   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Closing plugin on server side
	I0612 21:58:35.696803   86948 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:58:35.696815   86948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:58:35.696827   86948 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:58:35.696834   86948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:58:35.696833   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Closing plugin on server side
	I0612 21:58:35.707110   86948 api_server.go:279] https://192.168.50.207:8443/healthz returned 200:
	ok
	I0612 21:58:35.708912   86948 api_server.go:141] control plane version: v1.30.1
	I0612 21:58:35.708937   86948 api_server.go:131] duration metric: took 13.041257ms to wait for apiserver health ...
	I0612 21:58:35.708947   86948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:58:35.732416   86948 system_pods.go:59] 8 kube-system pods found
	I0612 21:58:35.732462   86948 system_pods.go:61] "coredns-7db6d8ff4d-7996b" [02830689-7662-464f-8a55-e553a984dc5b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:58:35.732474   86948 system_pods.go:61] "coredns-7db6d8ff4d-l5xd5" [e9382fd3-c07c-4eab-8813-a1fb72cf297b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:58:35.732481   86948 system_pods.go:61] "etcd-newest-cni-007396" [bd4b8459-da7a-4439-9880-5bdaadf89146] Running
	I0612 21:58:35.732488   86948 system_pods.go:61] "kube-apiserver-newest-cni-007396" [39eddcf8-9a17-44d6-a141-bdb000607a82] Running
	I0612 21:58:35.732495   86948 system_pods.go:61] "kube-controller-manager-newest-cni-007396" [e6f9fb22-bdda-44cd-bc5f-c51bb7addde0] Running
	I0612 21:58:35.732502   86948 system_pods.go:61] "kube-proxy-j972w" [fb2fd5fd-9c3c-4d01-9ab3-259b5fa602fe] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0612 21:58:35.732508   86948 system_pods.go:61] "kube-scheduler-newest-cni-007396" [60605366-6b4d-4303-b8a7-c3c29a1440a1] Running
	I0612 21:58:35.732514   86948 system_pods.go:61] "storage-provisioner" [b38936ce-e9eb-4c2f-b92d-8e8bdc8503c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:58:35.732522   86948 system_pods.go:74] duration metric: took 23.566681ms to wait for pod list to return data ...
	I0612 21:58:35.732530   86948 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:58:35.733267   86948 main.go:141] libmachine: Making call to close driver server
	I0612 21:58:35.733296   86948 main.go:141] libmachine: (newest-cni-007396) Calling .Close
	I0612 21:58:35.733634   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Closing plugin on server side
	I0612 21:58:35.733683   86948 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:58:35.733694   86948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:58:35.735779   86948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0612 21:58:35.737305   86948 addons.go:510] duration metric: took 1.187553992s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0612 21:58:35.736173   86948 default_sa.go:45] found service account: "default"
	I0612 21:58:35.737343   86948 default_sa.go:55] duration metric: took 4.806416ms for default service account to be created ...
	I0612 21:58:35.737351   86948 kubeadm.go:576] duration metric: took 1.187662747s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0612 21:58:35.737366   86948 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:58:35.741072   86948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:58:35.741095   86948 node_conditions.go:123] node cpu capacity is 2
	I0612 21:58:35.741105   86948 node_conditions.go:105] duration metric: took 3.73469ms to run NodePressure ...
	I0612 21:58:35.741117   86948 start.go:240] waiting for startup goroutines ...
	I0612 21:58:35.874994   86948 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-007396" context rescaled to 1 replicas
	I0612 21:58:35.875033   86948 start.go:245] waiting for cluster config update ...
	I0612 21:58:35.875045   86948 start.go:254] writing updated cluster config ...
	I0612 21:58:35.875308   86948 ssh_runner.go:195] Run: rm -f paused
	I0612 21:58:35.938166   86948 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:58:35.939681   86948 out.go:177] * Done! kubectl is now configured to use "newest-cni-007396" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.708577804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229524708556943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7d8b626-4b61-4948-a2c6-97526339bccf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.709159584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22df1706-220f-4669-b8a8-c15c4ea5db1c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.709232695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22df1706-220f-4669-b8a8-c15c4ea5db1c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.709405435Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228309199167879,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c48508b30ca15f6432a84141dd0b289e83aa9987e92fc3f9545889492605b8,PodSandboxId:5586f183312b241e003e9f7240dd5a617efdb6a93ac13d42d3956a4274f4b20f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718228289028635935,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d9ff0c0-b2e4-4535-b3e5-3cd361febf51,},Annotations:map[string]string{io.kubernetes.container.hash: 629593af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266,PodSandboxId:fc1a2a9794167dad660926e30bd665fa3f91e43e219af59cb20c26bd5ad50f52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228286137199473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cllsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e26b02-5b11-490e-a1b9-0f12c5ba3830,},Annotations:map[string]string{io.kubernetes.container.hash: c6223842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd,PodSandboxId:298152ff9d202bf8c1ded25c6afd2cb835cb421a74775d6f68e79b86790270c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718228278560675981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lrgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f9342e-2
677-44be-8e22-2a8f45feeb57,},Annotations:map[string]string{io.kubernetes.container.hash: 2db9a195,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718228278389385625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b
-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1,PodSandboxId:d24ba04db930e91176979c74dc3dd4d42613be658694683f9b1940988093f274,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228273661486870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b18924acfd4d72129dec681761dc7e0d,},Annotations:map[
string]string{io.kubernetes.container.hash: 547b9474,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f,PodSandboxId:d671a1828f6193b249faf9a4b6a8e3003ecfb8a2730173bf2597aa8131f9c0f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228273745792333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec9370d627717114473c25d049fcefb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249,PodSandboxId:ab600e8cd42e1d241ed0afd1bbddb5a35619bcbc31cdc206def77155a5713dc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228273626794447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb301e61c8490e956bfefe1ed20670f5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 5e727e58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031,PodSandboxId:fdb5a19c0f4892ccc5be280826a890dadf1554e5e56ad554e138a6bd09a3f163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228273633333558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beacfc2e631a20f6822e78f2107d4e
bb,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22df1706-220f-4669-b8a8-c15c4ea5db1c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.750576305Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4722d51f-2406-42ce-ab24-64d2eddcd66e name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.750650233Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4722d51f-2406-42ce-ab24-64d2eddcd66e name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.751661932Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d777605-38ce-4fe0-90a1-7c9df9e396ff name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.752869336Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229524752765181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d777605-38ce-4fe0-90a1-7c9df9e396ff name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.756208359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8e93248-fba8-4fdb-932e-a6fa035cdb14 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.756373220Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8e93248-fba8-4fdb-932e-a6fa035cdb14 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.757228270Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228309199167879,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c48508b30ca15f6432a84141dd0b289e83aa9987e92fc3f9545889492605b8,PodSandboxId:5586f183312b241e003e9f7240dd5a617efdb6a93ac13d42d3956a4274f4b20f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718228289028635935,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d9ff0c0-b2e4-4535-b3e5-3cd361febf51,},Annotations:map[string]string{io.kubernetes.container.hash: 629593af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266,PodSandboxId:fc1a2a9794167dad660926e30bd665fa3f91e43e219af59cb20c26bd5ad50f52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228286137199473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cllsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e26b02-5b11-490e-a1b9-0f12c5ba3830,},Annotations:map[string]string{io.kubernetes.container.hash: c6223842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd,PodSandboxId:298152ff9d202bf8c1ded25c6afd2cb835cb421a74775d6f68e79b86790270c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718228278560675981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lrgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f9342e-2
677-44be-8e22-2a8f45feeb57,},Annotations:map[string]string{io.kubernetes.container.hash: 2db9a195,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718228278389385625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b
-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1,PodSandboxId:d24ba04db930e91176979c74dc3dd4d42613be658694683f9b1940988093f274,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228273661486870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b18924acfd4d72129dec681761dc7e0d,},Annotations:map[
string]string{io.kubernetes.container.hash: 547b9474,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f,PodSandboxId:d671a1828f6193b249faf9a4b6a8e3003ecfb8a2730173bf2597aa8131f9c0f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228273745792333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec9370d627717114473c25d049fcefb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249,PodSandboxId:ab600e8cd42e1d241ed0afd1bbddb5a35619bcbc31cdc206def77155a5713dc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228273626794447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb301e61c8490e956bfefe1ed20670f5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 5e727e58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031,PodSandboxId:fdb5a19c0f4892ccc5be280826a890dadf1554e5e56ad554e138a6bd09a3f163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228273633333558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beacfc2e631a20f6822e78f2107d4e
bb,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a8e93248-fba8-4fdb-932e-a6fa035cdb14 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.800750205Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae2d2dce-405f-4579-b359-6a5c3933e87a name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.800843143Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae2d2dce-405f-4579-b359-6a5c3933e87a name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.802199495Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ff8a5e2-f1ae-44ac-b4c4-af78f5f62537 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.802616452Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229524802592025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ff8a5e2-f1ae-44ac-b4c4-af78f5f62537 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.803599123Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=234381e7-68b7-4c43-8846-d29e499958a2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.803703975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=234381e7-68b7-4c43-8846-d29e499958a2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.804170786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228309199167879,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c48508b30ca15f6432a84141dd0b289e83aa9987e92fc3f9545889492605b8,PodSandboxId:5586f183312b241e003e9f7240dd5a617efdb6a93ac13d42d3956a4274f4b20f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718228289028635935,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d9ff0c0-b2e4-4535-b3e5-3cd361febf51,},Annotations:map[string]string{io.kubernetes.container.hash: 629593af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266,PodSandboxId:fc1a2a9794167dad660926e30bd665fa3f91e43e219af59cb20c26bd5ad50f52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228286137199473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cllsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e26b02-5b11-490e-a1b9-0f12c5ba3830,},Annotations:map[string]string{io.kubernetes.container.hash: c6223842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd,PodSandboxId:298152ff9d202bf8c1ded25c6afd2cb835cb421a74775d6f68e79b86790270c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718228278560675981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lrgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f9342e-2
677-44be-8e22-2a8f45feeb57,},Annotations:map[string]string{io.kubernetes.container.hash: 2db9a195,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718228278389385625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b
-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1,PodSandboxId:d24ba04db930e91176979c74dc3dd4d42613be658694683f9b1940988093f274,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228273661486870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b18924acfd4d72129dec681761dc7e0d,},Annotations:map[
string]string{io.kubernetes.container.hash: 547b9474,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f,PodSandboxId:d671a1828f6193b249faf9a4b6a8e3003ecfb8a2730173bf2597aa8131f9c0f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228273745792333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec9370d627717114473c25d049fcefb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249,PodSandboxId:ab600e8cd42e1d241ed0afd1bbddb5a35619bcbc31cdc206def77155a5713dc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228273626794447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb301e61c8490e956bfefe1ed20670f5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 5e727e58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031,PodSandboxId:fdb5a19c0f4892ccc5be280826a890dadf1554e5e56ad554e138a6bd09a3f163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228273633333558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beacfc2e631a20f6822e78f2107d4e
bb,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=234381e7-68b7-4c43-8846-d29e499958a2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.846220138Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28d8dcda-9899-41a3-a445-36a2c6a8e8ab name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.846301997Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28d8dcda-9899-41a3-a445-36a2c6a8e8ab name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.847645371Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39b3b2f0-e070-4eed-a116-bf3f59828cab name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.848368937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229524848344325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39b3b2f0-e070-4eed-a116-bf3f59828cab name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.848948100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3fd68f7-93c0-4a51-beb5-0ca1b14439f1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.849023252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3fd68f7-93c0-4a51-beb5-0ca1b14439f1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:44 default-k8s-diff-port-376087 crio[733]: time="2024-06-12 21:58:44.849264977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228309199167879,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c48508b30ca15f6432a84141dd0b289e83aa9987e92fc3f9545889492605b8,PodSandboxId:5586f183312b241e003e9f7240dd5a617efdb6a93ac13d42d3956a4274f4b20f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718228289028635935,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d9ff0c0-b2e4-4535-b3e5-3cd361febf51,},Annotations:map[string]string{io.kubernetes.container.hash: 629593af,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266,PodSandboxId:fc1a2a9794167dad660926e30bd665fa3f91e43e219af59cb20c26bd5ad50f52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228286137199473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cllsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e26b02-5b11-490e-a1b9-0f12c5ba3830,},Annotations:map[string]string{io.kubernetes.container.hash: c6223842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd,PodSandboxId:298152ff9d202bf8c1ded25c6afd2cb835cb421a74775d6f68e79b86790270c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718228278560675981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lrgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98f9342e-2
677-44be-8e22-2a8f45feeb57,},Annotations:map[string]string{io.kubernetes.container.hash: 2db9a195,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70,PodSandboxId:c2c1a3fc0fb255a02209c584d528ccd2c57debb6d0179d3a1a2b1f4668b9177b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718228278389385625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52007a01-3640-4f32-8a4b
-94e6a2e849b0,},Annotations:map[string]string{io.kubernetes.container.hash: f3c9e7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1,PodSandboxId:d24ba04db930e91176979c74dc3dd4d42613be658694683f9b1940988093f274,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228273661486870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b18924acfd4d72129dec681761dc7e0d,},Annotations:map[
string]string{io.kubernetes.container.hash: 547b9474,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f,PodSandboxId:d671a1828f6193b249faf9a4b6a8e3003ecfb8a2730173bf2597aa8131f9c0f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228273745792333,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ec9370d627717114473c25d049fcefb,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249,PodSandboxId:ab600e8cd42e1d241ed0afd1bbddb5a35619bcbc31cdc206def77155a5713dc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228273626794447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb301e61c8490e956bfefe1ed20670f5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 5e727e58,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031,PodSandboxId:fdb5a19c0f4892ccc5be280826a890dadf1554e5e56ad554e138a6bd09a3f163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228273633333558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-376087,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beacfc2e631a20f6822e78f2107d4e
bb,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3fd68f7-93c0-4a51-beb5-0ca1b14439f1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2ec17a45953ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   c2c1a3fc0fb25       storage-provisioner
	c1c48508b30ca       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   5586f183312b2       busybox
	9247a0b60b235       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   fc1a2a9794167       coredns-7db6d8ff4d-cllsk
	976fbe2261bae       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      20 minutes ago      Running             kube-proxy                1                   298152ff9d202       kube-proxy-8lrgv
	58692ec525480       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   c2c1a3fc0fb25       storage-provisioner
	74488395e0d90       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      20 minutes ago      Running             kube-scheduler            1                   d671a1828f619       kube-scheduler-default-k8s-diff-port-376087
	d482ceea3aaf0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      20 minutes ago      Running             etcd                      1                   d24ba04db930e       etcd-default-k8s-diff-port-376087
	73a7a9216e1bd       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      20 minutes ago      Running             kube-controller-manager   1                   fdb5a19c0f489       kube-controller-manager-default-k8s-diff-port-376087
	5a2481a728ef8       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      20 minutes ago      Running             kube-apiserver            1                   ab600e8cd42e1       kube-apiserver-default-k8s-diff-port-376087
	
	
	==> coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56775 - 63860 "HINFO IN 801067738441133078.377083572015025222. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.022540843s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-376087
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-376087
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=default-k8s-diff-port-376087
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T21_29_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:29:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-376087
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:58:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:53:47 +0000   Wed, 12 Jun 2024 21:29:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:53:47 +0000   Wed, 12 Jun 2024 21:29:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:53:47 +0000   Wed, 12 Jun 2024 21:29:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:53:47 +0000   Wed, 12 Jun 2024 21:38:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.80
	  Hostname:    default-k8s-diff-port-376087
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1fd9d83f072143639931caba4728e6dc
	  System UUID:                1fd9d83f-0721-4363-9931-caba4728e6dc
	  Boot ID:                    ea378891-f3db-4d1d-84fa-ecfd5d125b38
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace    Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                                     ------------  ----------  ---------------  -------------  ---
	  default      busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system  coredns-7db6d8ff4d-cllsk                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system  etcd-default-k8s-diff-port-376087                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system  kube-apiserver-default-k8s-diff-port-376087              250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system  kube-controller-manager-default-k8s-diff-port-376087    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system  kube-proxy-8lrgv                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system  kube-scheduler-default-k8s-diff-port-376087              100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system  metrics-server-569cc877fc-xj4xk                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system  storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     28m                kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-376087 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-376087 event: Registered Node default-k8s-diff-port-376087 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-376087 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-376087 event: Registered Node default-k8s-diff-port-376087 in Controller
	
	
	==> dmesg <==
	[Jun12 21:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051535] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040107] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.514924] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.486366] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.617121] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.688679] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.061892] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066974] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.200501] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.123783] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.299113] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[  +4.492551] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.059931] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.926466] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +5.583934] kauditd_printk_skb: 97 callbacks suppressed
	[Jun12 21:38] systemd-fstab-generator[1548]: Ignoring "noauto" option for root device
	[  +3.745304] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.061558] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] <==
	{"level":"info","ts":"2024-06-12T21:38:32.130257Z","caller":"traceutil/trace.go:171","msg":"trace[786363981] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-569cc877fc-xj4xk.17d85f858e9fbe3f; range_end:; response_count:1; response_revision:616; }","duration":"781.784273ms","start":"2024-06-12T21:38:31.348465Z","end":"2024-06-12T21:38:32.130249Z","steps":["trace[786363981] 'agreement among raft nodes before linearized reading'  (duration: 781.687197ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T21:38:32.130276Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T21:38:31.348452Z","time spent":"781.819641ms","remote":"127.0.0.1:34034","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":827,"request content":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-xj4xk.17d85f858e9fbe3f\" "}
	{"level":"warn","ts":"2024-06-12T21:38:32.130395Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"633.149372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-xj4xk\" ","response":"range_response_count:1 size:4291"}
	{"level":"info","ts":"2024-06-12T21:38:32.130445Z","caller":"traceutil/trace.go:171","msg":"trace[118155624] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-xj4xk; range_end:; response_count:1; response_revision:616; }","duration":"633.21802ms","start":"2024-06-12T21:38:31.497219Z","end":"2024-06-12T21:38:32.130437Z","steps":["trace[118155624] 'agreement among raft nodes before linearized reading'  (duration: 633.148542ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T21:38:32.13047Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T21:38:31.497206Z","time spent":"633.257969ms","remote":"127.0.0.1:34158","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4314,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-xj4xk\" "}
	{"level":"warn","ts":"2024-06-12T21:38:32.130682Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"588.698045ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-12T21:38:32.130725Z","caller":"traceutil/trace.go:171","msg":"trace[840756090] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:616; }","duration":"588.767089ms","start":"2024-06-12T21:38:31.541949Z","end":"2024-06-12T21:38:32.130716Z","steps":["trace[840756090] 'agreement among raft nodes before linearized reading'  (duration: 588.714723ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T21:38:32.130749Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T21:38:31.541932Z","time spent":"588.81189ms","remote":"127.0.0.1:33936","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-06-12T21:38:32.256839Z","caller":"traceutil/trace.go:171","msg":"trace[693569229] linearizableReadLoop","detail":"{readStateIndex:655; appliedIndex:654; }","duration":"119.130643ms","start":"2024-06-12T21:38:32.137689Z","end":"2024-06-12T21:38:32.256819Z","steps":["trace[693569229] 'read index received'  (duration: 116.972605ms)","trace[693569229] 'applied index is now lower than readState.Index'  (duration: 2.157264ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T21:38:32.25716Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.447154ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-376087\" ","response":"range_response_count:1 size:5801"}
	{"level":"info","ts":"2024-06-12T21:38:32.257223Z","caller":"traceutil/trace.go:171","msg":"trace[1571614967] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-376087; range_end:; response_count:1; response_revision:617; }","duration":"119.537407ms","start":"2024-06-12T21:38:32.137674Z","end":"2024-06-12T21:38:32.257211Z","steps":["trace[1571614967] 'agreement among raft nodes before linearized reading'  (duration: 119.28556ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T21:38:32.257493Z","caller":"traceutil/trace.go:171","msg":"trace[1073083622] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"121.161057ms","start":"2024-06-12T21:38:32.136318Z","end":"2024-06-12T21:38:32.257479Z","steps":["trace[1073083622] 'process raft request'  (duration: 118.388483ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T21:38:58.539665Z","caller":"traceutil/trace.go:171","msg":"trace[255011594] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"130.768276ms","start":"2024-06-12T21:38:58.408881Z","end":"2024-06-12T21:38:58.539649Z","steps":["trace[255011594] 'process raft request'  (duration: 130.526182ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T21:47:55.996594Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":841}
	{"level":"info","ts":"2024-06-12T21:47:56.00618Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":841,"took":"9.037743ms","hash":2086884593,"current-db-size-bytes":2600960,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2600960,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-06-12T21:47:56.006274Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2086884593,"revision":841,"compact-revision":-1}
	{"level":"info","ts":"2024-06-12T21:52:56.0034Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1084}
	{"level":"info","ts":"2024-06-12T21:52:56.007629Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1084,"took":"3.615809ms","hash":1005742164,"current-db-size-bytes":2600960,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1642496,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-06-12T21:52:56.00773Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1005742164,"revision":1084,"compact-revision":841}
	{"level":"info","ts":"2024-06-12T21:57:56.01171Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1328}
	{"level":"info","ts":"2024-06-12T21:57:56.016326Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1328,"took":"4.058659ms","hash":2148778002,"current-db-size-bytes":2600960,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1613824,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-06-12T21:57:56.016446Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2148778002,"revision":1328,"compact-revision":1084}
	{"level":"warn","ts":"2024-06-12T21:58:10.344789Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.380292ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6214562927279344718 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-569cc877fc-xj4xk.17d85f858e9f8087\" mod_revision:1331 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-xj4xk.17d85f858e9f8087\" value_size:738 lease:6214562927279344715 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-xj4xk.17d85f858e9f8087\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-12T21:58:10.345154Z","caller":"traceutil/trace.go:171","msg":"trace[1700717425] transaction","detail":"{read_only:false; response_revision:1582; number_of_response:1; }","duration":"260.652085ms","start":"2024-06-12T21:58:10.084354Z","end":"2024-06-12T21:58:10.345007Z","steps":["trace[1700717425] 'process raft request'  (duration: 128.805605ms)","trace[1700717425] 'compare'  (duration: 131.208095ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-12T21:58:11.003517Z","caller":"traceutil/trace.go:171","msg":"trace[814911453] transaction","detail":"{read_only:false; response_revision:1583; number_of_response:1; }","duration":"172.840914ms","start":"2024-06-12T21:58:10.830653Z","end":"2024-06-12T21:58:11.003493Z","steps":["trace[814911453] 'process raft request'  (duration: 172.171739ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:58:45 up 21 min,  0 users,  load average: 0.48, 0.29, 0.16
	Linux default-k8s-diff-port-376087 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] <==
	I0612 21:52:58.351589       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:53:58.351298       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:53:58.351617       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:53:58.351668       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:53:58.351738       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:53:58.351790       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:53:58.353538       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:55:58.352949       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:55:58.353272       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:55:58.353343       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:55:58.354081       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:55:58.354208       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:55:58.355456       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:57:57.356173       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:57:57.356565       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0612 21:57:58.357191       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:57:58.357247       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:57:58.357260       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:57:58.357300       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:57:58.357345       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:57:58.358500       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] <==
	I0612 21:53:11.062980       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:53:40.513517       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:53:41.073950       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:54:10.519020       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:54:11.084834       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0612 21:54:15.016319       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="196.89µs"
	I0612 21:54:30.005549       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="140.286µs"
	E0612 21:54:40.523535       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:54:41.093005       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:55:10.528590       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:55:11.100170       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:55:40.535144       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:55:41.109781       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:56:10.540926       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:56:11.118117       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:56:40.545024       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:56:41.125624       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:57:10.550348       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:57:11.133616       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:57:40.554765       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:57:41.141572       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:58:10.560865       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:58:11.151564       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:58:40.566153       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:58:41.158676       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] <==
	I0612 21:37:58.800396       1 server_linux.go:69] "Using iptables proxy"
	I0612 21:37:58.821980       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.80"]
	I0612 21:37:58.869431       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 21:37:58.869486       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 21:37:58.869502       1 server_linux.go:165] "Using iptables Proxier"
	I0612 21:37:58.872020       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 21:37:58.872274       1 server.go:872] "Version info" version="v1.30.1"
	I0612 21:37:58.872306       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:37:58.873924       1 config.go:192] "Starting service config controller"
	I0612 21:37:58.875128       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 21:37:58.875258       1 config.go:101] "Starting endpoint slice config controller"
	I0612 21:37:58.875281       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 21:37:58.877112       1 config.go:319] "Starting node config controller"
	I0612 21:37:58.877137       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 21:37:58.975438       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 21:37:58.978166       1 shared_informer.go:320] Caches are synced for node config
	I0612 21:37:58.978263       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] <==
	I0612 21:37:54.590891       1 serving.go:380] Generated self-signed cert in-memory
	W0612 21:37:57.303482       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0612 21:37:57.307173       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 21:37:57.307219       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0612 21:37:57.307228       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 21:37:57.395153       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 21:37:57.395241       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:37:57.399688       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 21:37:57.399723       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 21:37:57.400309       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 21:37:57.400856       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 21:37:57.500500       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 21:55:53 default-k8s-diff-port-376087 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:55:53 default-k8s-diff-port-376087 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:55:53 default-k8s-diff-port-376087 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:56:03 default-k8s-diff-port-376087 kubelet[942]: E0612 21:56:03.990954     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:56:17 default-k8s-diff-port-376087 kubelet[942]: E0612 21:56:17.991290     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:56:29 default-k8s-diff-port-376087 kubelet[942]: E0612 21:56:29.991649     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:56:44 default-k8s-diff-port-376087 kubelet[942]: E0612 21:56:44.991583     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:56:53 default-k8s-diff-port-376087 kubelet[942]: E0612 21:56:53.021155     942 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:56:53 default-k8s-diff-port-376087 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:56:53 default-k8s-diff-port-376087 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:56:53 default-k8s-diff-port-376087 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:56:53 default-k8s-diff-port-376087 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:56:58 default-k8s-diff-port-376087 kubelet[942]: E0612 21:56:58.991541     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:57:13 default-k8s-diff-port-376087 kubelet[942]: E0612 21:57:13.991558     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:57:26 default-k8s-diff-port-376087 kubelet[942]: E0612 21:57:26.991463     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:57:40 default-k8s-diff-port-376087 kubelet[942]: E0612 21:57:40.992297     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:57:53 default-k8s-diff-port-376087 kubelet[942]: E0612 21:57:53.022733     942 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:57:53 default-k8s-diff-port-376087 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:57:53 default-k8s-diff-port-376087 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:57:53 default-k8s-diff-port-376087 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:57:53 default-k8s-diff-port-376087 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:57:55 default-k8s-diff-port-376087 kubelet[942]: E0612 21:57:55.991881     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:58:09 default-k8s-diff-port-376087 kubelet[942]: E0612 21:58:09.990843     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:58:24 default-k8s-diff-port-376087 kubelet[942]: E0612 21:58:24.993361     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	Jun 12 21:58:38 default-k8s-diff-port-376087 kubelet[942]: E0612 21:58:38.991485     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-xj4xk" podUID="d3ac0cb2-602d-489c-baeb-fa9a363de8af"
	
	
	==> storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] <==
	I0612 21:38:29.302536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0612 21:38:29.316256       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0612 21:38:29.316360       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0612 21:38:46.719855       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0612 21:38:46.720127       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-376087_6c5e4abe-2bbe-4ec1-b343-97a3ac787a86!
	I0612 21:38:46.720751       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a4eeb3f-de04-466b-82c0-44d5f3aabecc", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-376087_6c5e4abe-2bbe-4ec1-b343-97a3ac787a86 became leader
	I0612 21:38:46.820391       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-376087_6c5e4abe-2bbe-4ec1-b343-97a3ac787a86!
	
	
	==> storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] <==
	I0612 21:37:58.538903       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0612 21:38:28.546122       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
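The repeated metrics-server ImagePullBackOff in the kubelet log above is consistent with how the test enables the addon: the Audit table further down shows addons enable metrics-server being run with --images=MetricsServer=registry.k8s.io/echoserver:1.4 and --registries=MetricsServer=fake.domain, so the kubelet is asked to pull fake.domain/registry.k8s.io/echoserver:1.4 from a registry name that is not expected to resolve. A quick way to confirm which image the Deployment is pinned to (a sketch, assuming the addon's usual Deployment name metrics-server in kube-system):

    kubectl --context default-k8s-diff-port-376087 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'

If the rewrite took effect, this prints fake.domain/registry.k8s.io/echoserver:1.4, matching the pull error the kubelet reports.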
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-376087 -n default-k8s-diff-port-376087
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-376087 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-xj4xk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-376087 describe pod metrics-server-569cc877fc-xj4xk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-376087 describe pod metrics-server-569cc877fc-xj4xk: exit status 1 (62.165418ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-xj4xk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-376087 describe pod metrics-server-569cc877fc-xj4xk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (435.98s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (369.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-591460 -n embed-certs-591460
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-06-12 21:58:42.218002158 +0000 UTC m=+6473.832452536
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-591460 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-591460 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.992µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-591460 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
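Because the harness's 9m0s context had already expired, the follow-up describe could not run and the deployment info above is empty. The same checks can be made by hand; a sketch reusing the context, label and deployment names from the log above:

    kubectl --context embed-certs-591460 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context embed-certs-591460 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[0].image}'

The second command reports what the image assertion is effectively checking: the test expects the value to contain registry.k8s.io/echoserver:1.4, which is what --images=MetricsScraper= passed when the dashboard addon was enabled (see the Audit table below).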
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-591460 -n embed-certs-591460
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-591460 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-591460 logs -n 25: (1.268584102s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-576552 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | disable-driver-mounts-576552                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-087875             | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-376087  | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-591460            | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-983302        | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-087875                  | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-376087       | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:42 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-591460                 | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-983302             | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:57 UTC | 12 Jun 24 21:57 UTC |
	| start   | -p newest-cni-007396 --memory=2200 --alsologtostderr   | newest-cni-007396            | jenkins | v1.33.1 | 12 Jun 24 21:57 UTC | 12 Jun 24 21:58 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:58 UTC | 12 Jun 24 21:58 UTC |
	| addons  | enable metrics-server -p newest-cni-007396             | newest-cni-007396            | jenkins | v1.33.1 | 12 Jun 24 21:58 UTC | 12 Jun 24 21:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-007396                                   | newest-cni-007396            | jenkins | v1.33.1 | 12 Jun 24 21:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
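Several rows above, including the stops of no-preload-087875, default-k8s-diff-port-376087, embed-certs-591460 and newest-cni-007396, record no End Time, i.e. no completion was logged for those commands in this audit snapshot. The host state of any of these profiles can be checked with the same status invocation the harness itself uses, for example:

    out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-591460 -n embed-certs-591460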
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 21:57:39
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 21:57:39.550876   86948 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:57:39.551091   86948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:57:39.551099   86948 out.go:304] Setting ErrFile to fd 2...
	I0612 21:57:39.551103   86948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:57:39.551305   86948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:57:39.551845   86948 out.go:298] Setting JSON to false
	I0612 21:57:39.552797   86948 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9605,"bootTime":1718219855,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:57:39.552852   86948 start.go:139] virtualization: kvm guest
	I0612 21:57:39.555092   86948 out.go:177] * [newest-cni-007396] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:57:39.556394   86948 notify.go:220] Checking for updates...
	I0612 21:57:39.556401   86948 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:57:39.557868   86948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:57:39.559183   86948 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:57:39.560464   86948 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:57:39.561707   86948 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:57:39.562862   86948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:57:39.564433   86948 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:57:39.564581   86948 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:57:39.564673   86948 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:57:39.564757   86948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:57:39.602527   86948 out.go:177] * Using the kvm2 driver based on user configuration
	I0612 21:57:39.603758   86948 start.go:297] selected driver: kvm2
	I0612 21:57:39.603773   86948 start.go:901] validating driver "kvm2" against <nil>
	I0612 21:57:39.603791   86948 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:57:39.604500   86948 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:57:39.604557   86948 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:57:39.619433   86948 install.go:137] /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:57:39.619484   86948 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0612 21:57:39.619509   86948 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0612 21:57:39.619809   86948 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0612 21:57:39.619881   86948 cni.go:84] Creating CNI manager for ""
	I0612 21:57:39.619898   86948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:57:39.619906   86948 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0612 21:57:39.619980   86948 start.go:340] cluster config:
	{Name:newest-cni-007396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-007396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:57:39.620120   86948 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:57:39.622163   86948 out.go:177] * Starting "newest-cni-007396" primary control-plane node in "newest-cni-007396" cluster
	I0612 21:57:39.623198   86948 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:57:39.623233   86948 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0612 21:57:39.623239   86948 cache.go:56] Caching tarball of preloaded images
	I0612 21:57:39.623306   86948 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:57:39.623317   86948 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 21:57:39.623400   86948 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/config.json ...
	I0612 21:57:39.623415   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/config.json: {Name:mkddd57eb5daa435dc3b365b712f5a3c8140a077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:57:39.623523   86948 start.go:360] acquireMachinesLock for newest-cni-007396: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:57:39.623548   86948 start.go:364] duration metric: took 14.312µs to acquireMachinesLock for "newest-cni-007396"
	I0612 21:57:39.623561   86948 start.go:93] Provisioning new machine with config: &{Name:newest-cni-007396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-007396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:57:39.623612   86948 start.go:125] createHost starting for "" (driver="kvm2")
	I0612 21:57:39.625081   86948 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 21:57:39.625187   86948 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:57:39.625223   86948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:57:39.639278   86948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37357
	I0612 21:57:39.639724   86948 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:57:39.640265   86948 main.go:141] libmachine: Using API Version  1
	I0612 21:57:39.640286   86948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:57:39.640560   86948 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:57:39.640759   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetMachineName
	I0612 21:57:39.640954   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:57:39.641113   86948 start.go:159] libmachine.API.Create for "newest-cni-007396" (driver="kvm2")
	I0612 21:57:39.641148   86948 client.go:168] LocalClient.Create starting
	I0612 21:57:39.641174   86948 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem
	I0612 21:57:39.641200   86948 main.go:141] libmachine: Decoding PEM data...
	I0612 21:57:39.641212   86948 main.go:141] libmachine: Parsing certificate...
	I0612 21:57:39.641270   86948 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem
	I0612 21:57:39.641290   86948 main.go:141] libmachine: Decoding PEM data...
	I0612 21:57:39.641303   86948 main.go:141] libmachine: Parsing certificate...
	I0612 21:57:39.641319   86948 main.go:141] libmachine: Running pre-create checks...
	I0612 21:57:39.641327   86948 main.go:141] libmachine: (newest-cni-007396) Calling .PreCreateCheck
	I0612 21:57:39.641700   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetConfigRaw
	I0612 21:57:39.642164   86948 main.go:141] libmachine: Creating machine...
	I0612 21:57:39.642181   86948 main.go:141] libmachine: (newest-cni-007396) Calling .Create
	I0612 21:57:39.642316   86948 main.go:141] libmachine: (newest-cni-007396) Creating KVM machine...
	I0612 21:57:39.643669   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found existing default KVM network
	I0612 21:57:39.644988   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:39.644853   86970 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b9:6b:ca} reservation:<nil>}
	I0612 21:57:39.645969   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:39.645912   86970 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002b8150}
	I0612 21:57:39.646019   86948 main.go:141] libmachine: (newest-cni-007396) DBG | created network xml: 
	I0612 21:57:39.646043   86948 main.go:141] libmachine: (newest-cni-007396) DBG | <network>
	I0612 21:57:39.646054   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   <name>mk-newest-cni-007396</name>
	I0612 21:57:39.646066   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   <dns enable='no'/>
	I0612 21:57:39.646074   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   
	I0612 21:57:39.646080   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0612 21:57:39.646086   86948 main.go:141] libmachine: (newest-cni-007396) DBG |     <dhcp>
	I0612 21:57:39.646094   86948 main.go:141] libmachine: (newest-cni-007396) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0612 21:57:39.646102   86948 main.go:141] libmachine: (newest-cni-007396) DBG |     </dhcp>
	I0612 21:57:39.646109   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   </ip>
	I0612 21:57:39.646115   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   
	I0612 21:57:39.646125   86948 main.go:141] libmachine: (newest-cni-007396) DBG | </network>
	I0612 21:57:39.646152   86948 main.go:141] libmachine: (newest-cni-007396) DBG | 
	I0612 21:57:39.652264   86948 main.go:141] libmachine: (newest-cni-007396) DBG | trying to create private KVM network mk-newest-cni-007396 192.168.50.0/24...
	I0612 21:57:39.722112   86948 main.go:141] libmachine: (newest-cni-007396) DBG | private KVM network mk-newest-cni-007396 192.168.50.0/24 created
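At this point minikube has defined and started a dedicated libvirt network for the new profile. If it needs to be inspected on the host, virsh can dump the generated definition; a sketch using the network name and connection URI shown in the log:

    virsh --connect qemu:///system net-list --all
    virsh --connect qemu:///system net-dumpxml mk-newest-cni-007396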
	I0612 21:57:39.722210   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:39.722103   86970 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:57:39.722240   86948 main.go:141] libmachine: (newest-cni-007396) Setting up store path in /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396 ...
	I0612 21:57:39.722309   86948 main.go:141] libmachine: (newest-cni-007396) Building disk image from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0612 21:57:39.722340   86948 main.go:141] libmachine: (newest-cni-007396) Downloading /home/jenkins/minikube-integration/17779-14199/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0612 21:57:39.949912   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:39.949748   86970 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa...
	I0612 21:57:40.367958   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:40.367803   86970 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/newest-cni-007396.rawdisk...
	I0612 21:57:40.367993   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Writing magic tar header
	I0612 21:57:40.368005   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Writing SSH key tar header
	I0612 21:57:40.368014   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:40.367917   86970 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396 ...
	I0612 21:57:40.368030   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396
	I0612 21:57:40.368039   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines
	I0612 21:57:40.368052   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396 (perms=drwx------)
	I0612 21:57:40.368066   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines (perms=drwxr-xr-x)
	I0612 21:57:40.368080   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube (perms=drwxr-xr-x)
	I0612 21:57:40.368097   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199 (perms=drwxrwxr-x)
	I0612 21:57:40.368106   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0612 21:57:40.368143   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:57:40.368168   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0612 21:57:40.368175   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199
	I0612 21:57:40.368184   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0612 21:57:40.368191   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins
	I0612 21:57:40.368216   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home
	I0612 21:57:40.368230   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Skipping /home - not owner
	I0612 21:57:40.368244   86948 main.go:141] libmachine: (newest-cni-007396) Creating domain...
	I0612 21:57:40.369412   86948 main.go:141] libmachine: (newest-cni-007396) define libvirt domain using xml: 
	I0612 21:57:40.369429   86948 main.go:141] libmachine: (newest-cni-007396) <domain type='kvm'>
	I0612 21:57:40.369436   86948 main.go:141] libmachine: (newest-cni-007396)   <name>newest-cni-007396</name>
	I0612 21:57:40.369441   86948 main.go:141] libmachine: (newest-cni-007396)   <memory unit='MiB'>2200</memory>
	I0612 21:57:40.369447   86948 main.go:141] libmachine: (newest-cni-007396)   <vcpu>2</vcpu>
	I0612 21:57:40.369455   86948 main.go:141] libmachine: (newest-cni-007396)   <features>
	I0612 21:57:40.369463   86948 main.go:141] libmachine: (newest-cni-007396)     <acpi/>
	I0612 21:57:40.369474   86948 main.go:141] libmachine: (newest-cni-007396)     <apic/>
	I0612 21:57:40.369483   86948 main.go:141] libmachine: (newest-cni-007396)     <pae/>
	I0612 21:57:40.369495   86948 main.go:141] libmachine: (newest-cni-007396)     
	I0612 21:57:40.369504   86948 main.go:141] libmachine: (newest-cni-007396)   </features>
	I0612 21:57:40.369520   86948 main.go:141] libmachine: (newest-cni-007396)   <cpu mode='host-passthrough'>
	I0612 21:57:40.369553   86948 main.go:141] libmachine: (newest-cni-007396)   
	I0612 21:57:40.369579   86948 main.go:141] libmachine: (newest-cni-007396)   </cpu>
	I0612 21:57:40.369590   86948 main.go:141] libmachine: (newest-cni-007396)   <os>
	I0612 21:57:40.369597   86948 main.go:141] libmachine: (newest-cni-007396)     <type>hvm</type>
	I0612 21:57:40.369622   86948 main.go:141] libmachine: (newest-cni-007396)     <boot dev='cdrom'/>
	I0612 21:57:40.369631   86948 main.go:141] libmachine: (newest-cni-007396)     <boot dev='hd'/>
	I0612 21:57:40.369636   86948 main.go:141] libmachine: (newest-cni-007396)     <bootmenu enable='no'/>
	I0612 21:57:40.369643   86948 main.go:141] libmachine: (newest-cni-007396)   </os>
	I0612 21:57:40.369650   86948 main.go:141] libmachine: (newest-cni-007396)   <devices>
	I0612 21:57:40.369668   86948 main.go:141] libmachine: (newest-cni-007396)     <disk type='file' device='cdrom'>
	I0612 21:57:40.369685   86948 main.go:141] libmachine: (newest-cni-007396)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/boot2docker.iso'/>
	I0612 21:57:40.369701   86948 main.go:141] libmachine: (newest-cni-007396)       <target dev='hdc' bus='scsi'/>
	I0612 21:57:40.369713   86948 main.go:141] libmachine: (newest-cni-007396)       <readonly/>
	I0612 21:57:40.369719   86948 main.go:141] libmachine: (newest-cni-007396)     </disk>
	I0612 21:57:40.369725   86948 main.go:141] libmachine: (newest-cni-007396)     <disk type='file' device='disk'>
	I0612 21:57:40.369734   86948 main.go:141] libmachine: (newest-cni-007396)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0612 21:57:40.369766   86948 main.go:141] libmachine: (newest-cni-007396)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/newest-cni-007396.rawdisk'/>
	I0612 21:57:40.369805   86948 main.go:141] libmachine: (newest-cni-007396)       <target dev='hda' bus='virtio'/>
	I0612 21:57:40.369819   86948 main.go:141] libmachine: (newest-cni-007396)     </disk>
	I0612 21:57:40.369832   86948 main.go:141] libmachine: (newest-cni-007396)     <interface type='network'>
	I0612 21:57:40.369846   86948 main.go:141] libmachine: (newest-cni-007396)       <source network='mk-newest-cni-007396'/>
	I0612 21:57:40.369857   86948 main.go:141] libmachine: (newest-cni-007396)       <model type='virtio'/>
	I0612 21:57:40.369868   86948 main.go:141] libmachine: (newest-cni-007396)     </interface>
	I0612 21:57:40.369884   86948 main.go:141] libmachine: (newest-cni-007396)     <interface type='network'>
	I0612 21:57:40.369900   86948 main.go:141] libmachine: (newest-cni-007396)       <source network='default'/>
	I0612 21:57:40.369911   86948 main.go:141] libmachine: (newest-cni-007396)       <model type='virtio'/>
	I0612 21:57:40.369918   86948 main.go:141] libmachine: (newest-cni-007396)     </interface>
	I0612 21:57:40.369927   86948 main.go:141] libmachine: (newest-cni-007396)     <serial type='pty'>
	I0612 21:57:40.369935   86948 main.go:141] libmachine: (newest-cni-007396)       <target port='0'/>
	I0612 21:57:40.369947   86948 main.go:141] libmachine: (newest-cni-007396)     </serial>
	I0612 21:57:40.369954   86948 main.go:141] libmachine: (newest-cni-007396)     <console type='pty'>
	I0612 21:57:40.369967   86948 main.go:141] libmachine: (newest-cni-007396)       <target type='serial' port='0'/>
	I0612 21:57:40.369977   86948 main.go:141] libmachine: (newest-cni-007396)     </console>
	I0612 21:57:40.369986   86948 main.go:141] libmachine: (newest-cni-007396)     <rng model='virtio'>
	I0612 21:57:40.369995   86948 main.go:141] libmachine: (newest-cni-007396)       <backend model='random'>/dev/random</backend>
	I0612 21:57:40.370002   86948 main.go:141] libmachine: (newest-cni-007396)     </rng>
	I0612 21:57:40.370016   86948 main.go:141] libmachine: (newest-cni-007396)     
	I0612 21:57:40.370026   86948 main.go:141] libmachine: (newest-cni-007396)     
	I0612 21:57:40.370036   86948 main.go:141] libmachine: (newest-cni-007396)   </devices>
	I0612 21:57:40.370046   86948 main.go:141] libmachine: (newest-cni-007396) </domain>
	I0612 21:57:40.370060   86948 main.go:141] libmachine: (newest-cni-007396) 
	I0612 21:57:40.374484   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:ac:61:40 in network default
	I0612 21:57:40.375055   86948 main.go:141] libmachine: (newest-cni-007396) Ensuring networks are active...
	I0612 21:57:40.375074   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:40.375755   86948 main.go:141] libmachine: (newest-cni-007396) Ensuring network default is active
	I0612 21:57:40.376055   86948 main.go:141] libmachine: (newest-cni-007396) Ensuring network mk-newest-cni-007396 is active
	I0612 21:57:40.376588   86948 main.go:141] libmachine: (newest-cni-007396) Getting domain xml...
	I0612 21:57:40.377311   86948 main.go:141] libmachine: (newest-cni-007396) Creating domain...
	I0612 21:57:41.646694   86948 main.go:141] libmachine: (newest-cni-007396) Waiting to get IP...
	I0612 21:57:41.647535   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:41.647983   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:41.648009   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:41.647967   86970 retry.go:31] will retry after 232.64418ms: waiting for machine to come up
	I0612 21:57:41.882517   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:41.883132   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:41.883162   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:41.883063   86970 retry.go:31] will retry after 300.678306ms: waiting for machine to come up
	I0612 21:57:42.185385   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:42.185837   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:42.185867   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:42.185788   86970 retry.go:31] will retry after 322.355198ms: waiting for machine to come up
	I0612 21:57:42.509318   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:42.509851   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:42.509874   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:42.509823   86970 retry.go:31] will retry after 383.48604ms: waiting for machine to come up
	I0612 21:57:42.895499   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:42.896051   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:42.896083   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:42.896000   86970 retry.go:31] will retry after 681.668123ms: waiting for machine to come up
	I0612 21:57:43.579089   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:43.579655   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:43.579692   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:43.579608   86970 retry.go:31] will retry after 772.173706ms: waiting for machine to come up
	I0612 21:57:44.353493   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:44.353942   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:44.353965   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:44.353889   86970 retry.go:31] will retry after 1.081187064s: waiting for machine to come up
	I0612 21:57:45.436451   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:45.436949   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:45.436977   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:45.436901   86970 retry.go:31] will retry after 1.312080042s: waiting for machine to come up
	I0612 21:57:46.751288   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:46.751800   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:46.751823   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:46.751758   86970 retry.go:31] will retry after 1.211250846s: waiting for machine to come up
	I0612 21:57:47.964813   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:47.965255   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:47.965280   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:47.965195   86970 retry.go:31] will retry after 1.673381258s: waiting for machine to come up
	I0612 21:57:49.640173   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:49.640641   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:49.640664   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:49.640609   86970 retry.go:31] will retry after 1.995026566s: waiting for machine to come up
	I0612 21:57:51.638102   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:51.638614   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:51.638639   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:51.638561   86970 retry.go:31] will retry after 3.197679013s: waiting for machine to come up
	I0612 21:57:54.837509   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:54.838000   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:54.838028   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:54.837956   86970 retry.go:31] will retry after 3.462181977s: waiting for machine to come up
	I0612 21:57:58.304412   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:58.304897   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:58.304931   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:58.304819   86970 retry.go:31] will retry after 3.755357309s: waiting for machine to come up
	I0612 21:58:02.062774   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.063322   86948 main.go:141] libmachine: (newest-cni-007396) Found IP for machine: 192.168.50.207
	I0612 21:58:02.063351   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has current primary IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.063381   86948 main.go:141] libmachine: (newest-cni-007396) Reserving static IP address...
	I0612 21:58:02.063736   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find host DHCP lease matching {name: "newest-cni-007396", mac: "52:54:00:a5:e1:fb", ip: "192.168.50.207"} in network mk-newest-cni-007396
	I0612 21:58:02.146932   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Getting to WaitForSSH function...
	I0612 21:58:02.146965   86948 main.go:141] libmachine: (newest-cni-007396) Reserved static IP address: 192.168.50.207
	I0612 21:58:02.146979   86948 main.go:141] libmachine: (newest-cni-007396) Waiting for SSH to be available...
	I0612 21:58:02.149790   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.150289   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.150323   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.150483   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Using SSH client type: external
	I0612 21:58:02.150512   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa (-rw-------)
	I0612 21:58:02.150548   86948 main.go:141] libmachine: (newest-cni-007396) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:58:02.150565   86948 main.go:141] libmachine: (newest-cni-007396) DBG | About to run SSH command:
	I0612 21:58:02.150580   86948 main.go:141] libmachine: (newest-cni-007396) DBG | exit 0
	I0612 21:58:02.279618   86948 main.go:141] libmachine: (newest-cni-007396) DBG | SSH cmd err, output: <nil>: 
	I0612 21:58:02.279899   86948 main.go:141] libmachine: (newest-cni-007396) KVM machine creation complete!
	I0612 21:58:02.280217   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetConfigRaw
	I0612 21:58:02.280700   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:02.280886   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:02.281060   86948 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0612 21:58:02.281077   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetState
	I0612 21:58:02.282541   86948 main.go:141] libmachine: Detecting operating system of created instance...
	I0612 21:58:02.282554   86948 main.go:141] libmachine: Waiting for SSH to be available...
	I0612 21:58:02.282560   86948 main.go:141] libmachine: Getting to WaitForSSH function...
	I0612 21:58:02.282566   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.285113   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.285505   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.285535   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.285681   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:02.285880   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.286029   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.286215   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:02.286406   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:02.286581   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:02.286594   86948 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0612 21:58:02.394673   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
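
The WaitForSSH step above simply retries a trivial `exit 0` over SSH until the guest answers. A minimal Go sketch of the same probe, assuming key-based auth and using golang.org/x/crypto/ssh instead of the external /usr/bin/ssh invocation the log shows; the host, user, key path and retry interval are taken from the log but are otherwise placeholders, not minikube's actual implementation.

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // waitForSSH keeps running "exit 0" on the target until it succeeds or the
    // timeout expires, which is all the WaitForSSH step needs to establish.
    func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same effect as StrictHostKeyChecking=no
            Timeout:         10 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
                sess, serr := client.NewSession()
                if serr == nil {
                    rerr := sess.Run("exit 0")
                    sess.Close()
                    client.Close()
                    if rerr == nil {
                        return nil // SSH is usable
                    }
                } else {
                    client.Close()
                }
            }
            time.Sleep(3 * time.Second) // retry, as the log does with backoff
        }
        return fmt.Errorf("ssh to %s not available after %s", addr, timeout)
    }

    func main() {
        key := os.ExpandEnv("$HOME/.minikube/machines/newest-cni-007396/id_rsa")
        if err := waitForSSH("192.168.50.207:22", "docker", key, 2*time.Minute); err != nil {
            log.Fatal(err)
        }
        fmt.Println("SSH available")
    }
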
	I0612 21:58:02.394702   86948 main.go:141] libmachine: Detecting the provisioner...
	I0612 21:58:02.394714   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.397514   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.397799   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.397821   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.397989   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:02.398190   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.398390   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.398545   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:02.398715   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:02.398921   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:02.398932   86948 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0612 21:58:02.504115   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0612 21:58:02.504176   86948 main.go:141] libmachine: found compatible host: buildroot
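
The provisioner is picked by matching fields from the `cat /etc/os-release` output above (ID=buildroot). A small self-contained Go sketch of that parsing, fed the exact contents shown in the log:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns the key=value output of `cat /etc/os-release` into a
    // map, stripping surrounding quotes, so the provisioner can be chosen from ID.
    func parseOSRelease(contents string) map[string]string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(contents))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            fields[k] = strings.Trim(v, `"`)
        }
        return fields
    }

    func main() {
        osr := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        f := parseOSRelease(osr)
        fmt.Println(f["ID"], f["VERSION_ID"]) // buildroot 2023.02.9
    }
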
	I0612 21:58:02.504183   86948 main.go:141] libmachine: Provisioning with buildroot...
	I0612 21:58:02.504190   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetMachineName
	I0612 21:58:02.504433   86948 buildroot.go:166] provisioning hostname "newest-cni-007396"
	I0612 21:58:02.504459   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetMachineName
	I0612 21:58:02.504702   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.508127   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.508526   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.508555   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.508732   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:02.508920   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.509065   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.509177   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:02.509332   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:02.509586   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:02.509607   86948 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-007396 && echo "newest-cni-007396" | sudo tee /etc/hostname
	I0612 21:58:02.630796   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-007396
	
	I0612 21:58:02.630828   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.633959   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.634507   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.634545   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.634710   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:02.634901   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.635104   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.635310   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:02.635497   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:02.635697   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:02.635723   86948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-007396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-007396/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-007396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:58:02.754971   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:58:02.755003   86948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:58:02.755025   86948 buildroot.go:174] setting up certificates
	I0612 21:58:02.755037   86948 provision.go:84] configureAuth start
	I0612 21:58:02.755049   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetMachineName
	I0612 21:58:02.755367   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetIP
	I0612 21:58:02.757918   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.758342   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.758374   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.758471   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.761085   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.761409   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.761437   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.761582   86948 provision.go:143] copyHostCerts
	I0612 21:58:02.761670   86948 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:58:02.761680   86948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:58:02.761744   86948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:58:02.761842   86948 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:58:02.761850   86948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:58:02.761872   86948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:58:02.761932   86948 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:58:02.761939   86948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:58:02.761959   86948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:58:02.762037   86948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.newest-cni-007396 san=[127.0.0.1 192.168.50.207 localhost minikube newest-cni-007396]
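
The server certificate generated here is a regular X.509 cert whose SANs cover the names and IPs in the san=[...] list, signed by the minikube CA. A hedged Go sketch of the same shape using crypto/x509; it creates a throwaway ECDSA CA for brevity (the real ca.pem/ca-key.pem under .minikube/certs may use a different key type), and the validity periods are arbitrary.

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // must keeps the sketch short; a real implementation would return errors.
    func must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        // Throwaway CA standing in for .minikube/certs/ca.pem and ca-key.pem.
        caKey := must(ecdsa.GenerateKey(elliptic.P256(), rand.Reader))
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
        caCert := must(x509.ParseCertificate(caDER))

        // Server cert carrying the same SANs the log reports for this machine.
        srvKey := must(ecdsa.GenerateKey(elliptic.P256(), rand.Reader))
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-007396"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-007396"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.207")},
        }
        srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }
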
	I0612 21:58:02.983584   86948 provision.go:177] copyRemoteCerts
	I0612 21:58:02.983643   86948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:58:02.983665   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.986420   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.986728   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.986767   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.986935   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:02.987149   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.987356   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:02.987507   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:03.069906   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0612 21:58:03.095863   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 21:58:03.124797   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:58:03.149919   86948 provision.go:87] duration metric: took 394.869081ms to configureAuth
	I0612 21:58:03.149945   86948 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:58:03.150170   86948 config.go:182] Loaded profile config "newest-cni-007396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:58:03.150272   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:03.153322   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.153699   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.153737   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.153974   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:03.154243   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.154441   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.154623   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:03.154845   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:03.154995   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:03.155009   86948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:58:03.430020   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:58:03.430053   86948 main.go:141] libmachine: Checking connection to Docker...
	I0612 21:58:03.430064   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetURL
	I0612 21:58:03.431420   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Using libvirt version 6000000
	I0612 21:58:03.433660   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.434051   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.434083   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.434223   86948 main.go:141] libmachine: Docker is up and running!
	I0612 21:58:03.434238   86948 main.go:141] libmachine: Reticulating splines...
	I0612 21:58:03.434247   86948 client.go:171] duration metric: took 23.793089795s to LocalClient.Create
	I0612 21:58:03.434273   86948 start.go:167] duration metric: took 23.793159772s to libmachine.API.Create "newest-cni-007396"
	I0612 21:58:03.434286   86948 start.go:293] postStartSetup for "newest-cni-007396" (driver="kvm2")
	I0612 21:58:03.434298   86948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:58:03.434317   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:03.434571   86948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:58:03.434594   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:03.436668   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.436966   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.436998   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.437209   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:03.437409   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.437582   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:03.437706   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:03.526365   86948 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:58:03.530621   86948 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:58:03.530646   86948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:58:03.530713   86948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:58:03.531006   86948 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:58:03.531139   86948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:58:03.541890   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:58:03.567793   86948 start.go:296] duration metric: took 133.495039ms for postStartSetup
	I0612 21:58:03.567838   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetConfigRaw
	I0612 21:58:03.568519   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetIP
	I0612 21:58:03.571244   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.571648   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.571675   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.571966   86948 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/config.json ...
	I0612 21:58:03.572180   86948 start.go:128] duration metric: took 23.948557924s to createHost
	I0612 21:58:03.572207   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:03.574448   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.574799   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.574824   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.575004   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:03.575225   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.575414   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.575577   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:03.575750   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:03.575947   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:03.575960   86948 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 21:58:03.680255   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718229483.653291457
	
	I0612 21:58:03.680279   86948 fix.go:216] guest clock: 1718229483.653291457
	I0612 21:58:03.680288   86948 fix.go:229] Guest: 2024-06-12 21:58:03.653291457 +0000 UTC Remote: 2024-06-12 21:58:03.572192588 +0000 UTC m=+24.058769808 (delta=81.098869ms)
	I0612 21:58:03.680348   86948 fix.go:200] guest clock delta is within tolerance: 81.098869ms
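
The guest-clock check parses the `date +%s.%N` output (1718229483.653291457 above) and accepts it if the delta against the host clock is small. A short Go sketch of that comparison; the 2-second tolerance below is a placeholder, not necessarily the threshold minikube applies.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts `date +%s.%N` output from the guest into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
        s, err := strconv.ParseInt(sec, 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var ns int64
        if frac != "" {
            frac = (frac + "000000000")[:9] // pad/truncate to nanosecond precision
            if ns, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(s, ns), nil
    }

    func main() {
        guest, err := parseGuestClock("1718229483.653291457") // value from the log above
        if err != nil {
            panic(err)
        }
        delta := guest.Sub(time.Now())
        const tolerance = 2 * time.Second // hypothetical threshold
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance; clocks would need syncing\n", delta)
        }
    }
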
	I0612 21:58:03.680359   86948 start.go:83] releasing machines lock for "newest-cni-007396", held for 24.056803081s
	I0612 21:58:03.680388   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:03.680651   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetIP
	I0612 21:58:03.683199   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.683495   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.683520   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.683694   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:03.684217   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:03.684420   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:03.684511   86948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:58:03.684561   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:03.684619   86948 ssh_runner.go:195] Run: cat /version.json
	I0612 21:58:03.684642   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:03.687373   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.687651   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.687709   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.687765   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.687870   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:03.688095   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.688146   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.688172   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.688279   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:03.688389   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:03.688453   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:03.688521   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.688685   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:03.688838   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:03.764995   86948 ssh_runner.go:195] Run: systemctl --version
	I0612 21:58:03.787664   86948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:58:03.948904   86948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:58:03.955287   86948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:58:03.955368   86948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:58:03.973537   86948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:58:03.973563   86948 start.go:494] detecting cgroup driver to use...
	I0612 21:58:03.973630   86948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:58:03.991002   86948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:58:04.004854   86948 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:58:04.004913   86948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:58:04.019058   86948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:58:04.032658   86948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:58:04.158544   86948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:58:04.315596   86948 docker.go:233] disabling docker service ...
	I0612 21:58:04.315682   86948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:58:04.333215   86948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:58:04.350500   86948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:58:04.497343   86948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:58:04.640728   86948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:58:04.668553   86948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:58:04.691878   86948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:58:04.691939   86948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:58:04.706849   86948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:58:04.706901   86948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:58:04.717640   86948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:58:04.729069   86948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:58:04.741733   86948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:58:04.754037   86948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:58:04.765874   86948 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:58:04.785919   86948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
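
The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. Roughly, the drop-in should end up containing entries like the following illustrative fragment; the exact section headers and any other defaults shipped in the ISO may differ.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
        "net.ipv4.ip_unprivileged_port_start=0",
    ]
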
	I0612 21:58:04.797651   86948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:58:04.807726   86948 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:58:04.807786   86948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:58:04.821239   86948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:58:04.835092   86948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:58:04.982309   86948 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:58:05.139997   86948 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:58:05.140070   86948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:58:05.146463   86948 start.go:562] Will wait 60s for crictl version
	I0612 21:58:05.146517   86948 ssh_runner.go:195] Run: which crictl
	I0612 21:58:05.150978   86948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:58:05.200770   86948 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
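
The 60-second waits a few lines above boil down to polling for a path (the CRI socket, then crictl's version call) with a deadline. A minimal Go sketch of the socket wait:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until the given path exists (e.g. /var/run/crio/crio.sock)
    // or the timeout expires, mirroring the "Will wait 60s for socket path" step.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("%s did not appear within %s", path, timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("CRI socket is ready; `crictl version` can be queried next")
    }
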
	I0612 21:58:05.200843   86948 ssh_runner.go:195] Run: crio --version
	I0612 21:58:05.233305   86948 ssh_runner.go:195] Run: crio --version
	I0612 21:58:05.271552   86948 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:58:05.272867   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetIP
	I0612 21:58:05.275387   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:05.275787   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:05.275820   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:05.275981   86948 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0612 21:58:05.280392   86948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:58:05.297132   86948 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0612 21:58:05.298554   86948 kubeadm.go:877] updating cluster {Name:newest-cni-007396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:newest-cni-007396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.207 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:58:05.298678   86948 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:58:05.298737   86948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:58:05.337708   86948 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:58:05.337763   86948 ssh_runner.go:195] Run: which lz4
	I0612 21:58:05.341928   86948 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 21:58:05.346383   86948 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:58:05.346413   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 21:58:06.865952   86948 crio.go:462] duration metric: took 1.524051425s to copy over tarball
	I0612 21:58:06.866020   86948 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:58:09.120553   86948 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.254511001s)
	I0612 21:58:09.120579   86948 crio.go:469] duration metric: took 2.254598258s to extract the tarball
	I0612 21:58:09.120589   86948 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:58:09.160964   86948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:58:09.211479   86948 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:58:09.211501   86948 cache_images.go:84] Images are preloaded, skipping loading
	I0612 21:58:09.211508   86948 kubeadm.go:928] updating node { 192.168.50.207 8443 v1.30.1 crio true true} ...
	I0612 21:58:09.211628   86948 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-007396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:newest-cni-007396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:58:09.211712   86948 ssh_runner.go:195] Run: crio config
	I0612 21:58:09.264731   86948 cni.go:84] Creating CNI manager for ""
	I0612 21:58:09.264750   86948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:58:09.264757   86948 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0612 21:58:09.264778   86948 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.207 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-007396 NodeName:newest-cni-007396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.50.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:58:09.264915   86948 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-007396"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:58:09.264972   86948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:58:09.275107   86948 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:58:09.275189   86948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:58:09.284547   86948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0612 21:58:09.301703   86948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:58:09.318529   86948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0612 21:58:09.335761   86948 ssh_runner.go:195] Run: grep 192.168.50.207	control-plane.minikube.internal$ /etc/hosts
	I0612 21:58:09.340128   86948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:58:09.354191   86948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:58:09.489939   86948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:58:09.508379   86948 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396 for IP: 192.168.50.207
	I0612 21:58:09.508400   86948 certs.go:194] generating shared ca certs ...
	I0612 21:58:09.508419   86948 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:09.508563   86948 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:58:09.508626   86948 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:58:09.508641   86948 certs.go:256] generating profile certs ...
	I0612 21:58:09.508708   86948 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/client.key
	I0612 21:58:09.508729   86948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/client.crt with IP's: []
	I0612 21:58:09.646440   86948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/client.crt ...
	I0612 21:58:09.646468   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/client.crt: {Name:mkc8d2681965bb16e4abe8bad19c8322752630f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:09.646660   86948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/client.key ...
	I0612 21:58:09.646675   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/client.key: {Name:mkfea61ee91e6b012e734ab300bc57a95ec6dee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:09.646759   86948 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.key.7c9e52d7
	I0612 21:58:09.646774   86948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.crt.7c9e52d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.207]
	I0612 21:58:09.781803   86948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.crt.7c9e52d7 ...
	I0612 21:58:09.781837   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.crt.7c9e52d7: {Name:mkf4dc4131392447b68af9b8a04ac3d6e5d9d16f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:09.782056   86948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.key.7c9e52d7 ...
	I0612 21:58:09.782090   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.key.7c9e52d7: {Name:mk98e37ee3f5da6e372801d2604565c36364469a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:09.782208   86948 certs.go:381] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.crt.7c9e52d7 -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.crt
	I0612 21:58:09.782322   86948 certs.go:385] copying /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.key.7c9e52d7 -> /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.key
	I0612 21:58:09.782385   86948 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.key
	I0612 21:58:09.782411   86948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.crt with IP's: []
	I0612 21:58:09.920251   86948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.crt ...
	I0612 21:58:09.920276   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.crt: {Name:mke1aa3213902e5b9f72aa2b601c889050adacc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:09.920445   86948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.key ...
	I0612 21:58:09.920461   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.key: {Name:mk9212b5a154365129543410a8c5012b30573116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:09.920673   86948 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:58:09.920708   86948 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:58:09.920718   86948 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:58:09.920741   86948 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:58:09.920761   86948 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:58:09.920784   86948 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:58:09.920818   86948 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:58:09.921501   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:58:09.951234   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:58:09.976768   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:58:10.000677   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:58:10.027609   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 21:58:10.053069   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 21:58:10.080270   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:58:10.106927   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:58:10.132480   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:58:10.159497   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:58:10.189262   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:58:10.214862   86948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:58:10.233571   86948 ssh_runner.go:195] Run: openssl version
	I0612 21:58:10.239298   86948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:58:10.250683   86948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:58:10.255295   86948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:58:10.255357   86948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:58:10.261255   86948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:58:10.272803   86948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:58:10.289408   86948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:58:10.294212   86948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:58:10.294267   86948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:58:10.302667   86948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:58:10.321450   86948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:58:10.335517   86948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:58:10.341419   86948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:58:10.341488   86948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:58:10.350555   86948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
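Editor's note: the symlink names in the three commands above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash values; `openssl x509 -hash -noout -in <cert>` prints the hash that OpenSSL expects to find as `<hash>.0` under /etc/ssl/certs. A minimal Go sketch of that hash-and-link step, assuming openssl is on PATH and using illustrative paths rather than minikube's internal helpers:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCert mirrors the log above: compute the OpenSSL subject hash of a CA
    // certificate and expose it as <certsDir>/<hash>.0 so TLS lookups find it.
    func linkCert(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	// Drop any stale link first so repeated runs stay idempotent.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	// Path taken from the log; run as root to write into /etc/ssl/certs.
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }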
	I0612 21:58:10.362759   86948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:58:10.369029   86948 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 21:58:10.369088   86948 kubeadm.go:391] StartCluster: {Name:newest-cni-007396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:newest-cni-007396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.207 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:58:10.369171   86948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:58:10.369229   86948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:58:10.406297   86948 cri.go:89] found id: ""
	I0612 21:58:10.406376   86948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0612 21:58:10.416929   86948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:58:10.426929   86948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:58:10.436717   86948 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:58:10.436743   86948 kubeadm.go:156] found existing configuration files:
	
	I0612 21:58:10.436792   86948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:58:10.446501   86948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:58:10.446560   86948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:58:10.456054   86948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:58:10.465116   86948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:58:10.465165   86948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:58:10.474674   86948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:58:10.484274   86948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:58:10.484315   86948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:58:10.494358   86948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:58:10.503951   86948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:58:10.503999   86948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
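Editor's note: the four grep-then-rm pairs above implement a simple stale-config check: keep each /etc/kubernetes/*.conf only if it already references the expected control-plane endpoint, otherwise delete it before kubeadm init regenerates it. A hedged Go sketch of that loop (running the commands locally instead of over minikube's ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // cleanStaleConfig removes a kubeconfig-style file unless it already mentions
    // the expected control-plane endpoint, mirroring the grep/rm pairs in the log.
    func cleanStaleConfig(endpoint string, files []string) {
    	for _, f := range files {
    		// grep exits non-zero when the endpoint (or the file itself) is missing.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }

    func main() {
    	cleanStaleConfig("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }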
	I0612 21:58:10.513541   86948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:58:10.621012   86948 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 21:58:10.621129   86948 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:58:10.749156   86948 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:58:10.749308   86948 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:58:10.749442   86948 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:58:10.987184   86948 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:58:11.117107   86948 out.go:204]   - Generating certificates and keys ...
	I0612 21:58:11.117241   86948 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:58:11.117335   86948 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:58:11.117426   86948 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0612 21:58:11.332874   86948 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0612 21:58:11.794187   86948 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0612 21:58:11.915133   86948 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0612 21:58:12.182141   86948 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0612 21:58:12.182380   86948 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-007396] and IPs [192.168.50.207 127.0.0.1 ::1]
	I0612 21:58:12.590048   86948 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0612 21:58:12.590278   86948 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-007396] and IPs [192.168.50.207 127.0.0.1 ::1]
	I0612 21:58:12.689980   86948 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0612 21:58:12.865854   86948 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0612 21:58:12.947581   86948 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0612 21:58:12.947883   86948 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:58:13.141280   86948 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:58:13.330698   86948 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 21:58:13.405686   86948 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:58:13.489125   86948 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:58:13.617590   86948 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:58:13.618344   86948 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:58:13.622803   86948 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:58:13.624841   86948 out.go:204]   - Booting up control plane ...
	I0612 21:58:13.624928   86948 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:58:13.625029   86948 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:58:13.625461   86948 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:58:13.641310   86948 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:58:13.643459   86948 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:58:13.643572   86948 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:58:13.774077   86948 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 21:58:13.774205   86948 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 21:58:14.775671   86948 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002167299s
	I0612 21:58:14.775770   86948 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 21:58:20.274766   86948 kubeadm.go:309] [api-check] The API server is healthy after 5.501170917s
	I0612 21:58:20.293180   86948 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 21:58:20.313804   86948 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 21:58:20.358713   86948 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 21:58:20.358977   86948 kubeadm.go:309] [mark-control-plane] Marking the node newest-cni-007396 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 21:58:20.376957   86948 kubeadm.go:309] [bootstrap-token] Using token: ap57h1.bcf4gjm029dmbwa9
	I0612 21:58:20.378627   86948 out.go:204]   - Configuring RBAC rules ...
	I0612 21:58:20.378811   86948 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 21:58:20.389584   86948 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 21:58:20.402127   86948 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 21:58:20.414966   86948 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 21:58:20.424366   86948 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 21:58:20.434058   86948 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 21:58:20.681506   86948 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 21:58:21.123454   86948 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 21:58:21.681294   86948 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 21:58:21.681351   86948 kubeadm.go:309] 
	I0612 21:58:21.681444   86948 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 21:58:21.681456   86948 kubeadm.go:309] 
	I0612 21:58:21.681563   86948 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 21:58:21.681574   86948 kubeadm.go:309] 
	I0612 21:58:21.681627   86948 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 21:58:21.681716   86948 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 21:58:21.681783   86948 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 21:58:21.681793   86948 kubeadm.go:309] 
	I0612 21:58:21.681874   86948 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 21:58:21.681887   86948 kubeadm.go:309] 
	I0612 21:58:21.681943   86948 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 21:58:21.681951   86948 kubeadm.go:309] 
	I0612 21:58:21.682016   86948 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 21:58:21.682119   86948 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 21:58:21.682234   86948 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 21:58:21.682246   86948 kubeadm.go:309] 
	I0612 21:58:21.682380   86948 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 21:58:21.682499   86948 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 21:58:21.682513   86948 kubeadm.go:309] 
	I0612 21:58:21.682639   86948 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ap57h1.bcf4gjm029dmbwa9 \
	I0612 21:58:21.682793   86948 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 21:58:21.682825   86948 kubeadm.go:309] 	--control-plane 
	I0612 21:58:21.682835   86948 kubeadm.go:309] 
	I0612 21:58:21.682950   86948 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 21:58:21.682962   86948 kubeadm.go:309] 
	I0612 21:58:21.683106   86948 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ap57h1.bcf4gjm029dmbwa9 \
	I0612 21:58:21.683259   86948 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 21:58:21.683429   86948 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:58:21.683458   86948 cni.go:84] Creating CNI manager for ""
	I0612 21:58:21.683472   86948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:58:21.685477   86948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:58:21.686802   86948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:58:21.700191   86948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
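Editor's note: the 496-byte file copied above is the bridge CNI configuration that CRI-O reads from /etc/cni/net.d. The exact bytes are not reproduced in the log, so the conflist below is only an illustration of the general shape such a file takes; the plugin options are assumptions, while the 10.42.0.0/16 pod CIDR comes from the ExtraOptions shown in the StartCluster config earlier in this run:

    package main

    import "os"

    // An illustrative bridge conflist; NOT the exact content minikube writes.
    const bridgeConflist = `{
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.42.0.0/16" }]]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	// Path taken from the log above; 0644 is a typical config permission.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		panic(err)
    	}
    }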
	I0612 21:58:21.722111   86948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:58:21.722176   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:21.722202   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-007396 minikube.k8s.io/updated_at=2024_06_12T21_58_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=newest-cni-007396 minikube.k8s.io/primary=true
	I0612 21:58:21.953387   86948 ops.go:34] apiserver oom_adj: -16
	I0612 21:58:21.953438   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:22.454537   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:22.954181   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:23.453931   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:23.953994   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:24.454407   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:24.954182   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:25.454300   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:25.953518   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:26.453740   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:26.953940   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:27.454030   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:27.954217   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:28.454157   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:28.953544   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:29.453862   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:29.953973   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:30.453562   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:30.953669   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:31.453454   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:31.953594   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:32.454081   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:32.953549   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:33.454345   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:33.954284   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:34.454408   86948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:58:34.547344   86948 kubeadm.go:1107] duration metric: took 12.825231402s to wait for elevateKubeSystemPrivileges
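Editor's note: the run of `kubectl get sa default` commands above, spaced roughly 500ms apart, is a wait loop: keep polling until the default service account exists so the minikube-rbac clusterrolebinding created earlier can take effect. A small Go sketch of the same polling pattern, under the assumption that the kubectl and kubeconfig paths shown in the log are used directly:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls `kubectl get sa default` every 500ms until it
    // succeeds or the context deadline passes, mirroring the loop in the log.
    func waitForDefaultSA(ctx context.Context, kubectlPath, kubeconfig string) error {
    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()
    	for {
    		cmd := exec.CommandContext(ctx, kubectlPath, "get", "sa", "default", "--kubeconfig", kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil // the default service account exists
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("default service account never appeared: %w", ctx.Err())
    		case <-tick.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	if err := waitForDefaultSA(ctx, "/var/lib/minikube/binaries/v1.30.1/kubectl", "/var/lib/minikube/kubeconfig"); err != nil {
    		panic(err)
    	}
    }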
	W0612 21:58:34.547385   86948 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 21:58:34.547396   86948 kubeadm.go:393] duration metric: took 24.178318758s to StartCluster
	I0612 21:58:34.547414   86948 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:34.547495   86948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:58:34.549447   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:58:34.549652   86948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0612 21:58:34.549667   86948 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.207 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:58:34.551622   86948 out.go:177] * Verifying Kubernetes components...
	I0612 21:58:34.549873   86948 config.go:182] Loaded profile config "newest-cni-007396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:58:34.549749   86948 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:58:34.554137   86948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:58:34.552957   86948 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-007396"
	I0612 21:58:34.554235   86948 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-007396"
	I0612 21:58:34.552971   86948 addons.go:69] Setting default-storageclass=true in profile "newest-cni-007396"
	I0612 21:58:34.554273   86948 host.go:66] Checking if "newest-cni-007396" exists ...
	I0612 21:58:34.554293   86948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-007396"
	I0612 21:58:34.554631   86948 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:58:34.554631   86948 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:58:34.554656   86948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:58:34.554668   86948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:58:34.570456   86948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35491
	I0612 21:58:34.570653   86948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34337
	I0612 21:58:34.570992   86948 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:58:34.571115   86948 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:58:34.571524   86948 main.go:141] libmachine: Using API Version  1
	I0612 21:58:34.571547   86948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:58:34.571687   86948 main.go:141] libmachine: Using API Version  1
	I0612 21:58:34.571706   86948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:58:34.571921   86948 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:58:34.572087   86948 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:58:34.572427   86948 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:58:34.572455   86948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:58:34.572733   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetState
	I0612 21:58:34.577156   86948 addons.go:234] Setting addon default-storageclass=true in "newest-cni-007396"
	I0612 21:58:34.577201   86948 host.go:66] Checking if "newest-cni-007396" exists ...
	I0612 21:58:34.577545   86948 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:58:34.577573   86948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:58:34.589306   86948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36275
	I0612 21:58:34.589759   86948 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:58:34.590343   86948 main.go:141] libmachine: Using API Version  1
	I0612 21:58:34.590370   86948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:58:34.590714   86948 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:58:34.590945   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetState
	I0612 21:58:34.592939   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:34.595099   86948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:58:34.593962   86948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I0612 21:58:34.595969   86948 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:58:34.596615   86948 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:58:34.596631   86948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:58:34.596645   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:34.597442   86948 main.go:141] libmachine: Using API Version  1
	I0612 21:58:34.597473   86948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:58:34.597840   86948 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:58:34.598500   86948 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:58:34.598543   86948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:58:34.600313   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:34.600744   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:34.600770   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:34.601082   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:34.601281   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:34.601433   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:34.601586   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:34.613275   86948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38897
	I0612 21:58:34.613677   86948 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:58:34.614115   86948 main.go:141] libmachine: Using API Version  1
	I0612 21:58:34.614133   86948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:58:34.614422   86948 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:58:34.614591   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetState
	I0612 21:58:34.616343   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:34.616566   86948 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:58:34.616582   86948 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:58:34.616600   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:34.619820   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:34.620104   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:34.620124   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:34.620268   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:34.620388   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:34.620490   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:34.620589   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:34.849923   86948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:58:34.849967   86948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0612 21:58:34.966129   86948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:58:35.029466   86948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:58:35.369654   86948 start.go:946] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0612 21:58:35.371980   86948 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:58:35.372059   86948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:58:35.695766   86948 main.go:141] libmachine: Making call to close driver server
	I0612 21:58:35.695798   86948 main.go:141] libmachine: (newest-cni-007396) Calling .Close
	I0612 21:58:35.695855   86948 api_server.go:72] duration metric: took 1.146158724s to wait for apiserver process to appear ...
	I0612 21:58:35.695873   86948 main.go:141] libmachine: Making call to close driver server
	I0612 21:58:35.695887   86948 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:58:35.695899   86948 main.go:141] libmachine: (newest-cni-007396) Calling .Close
	I0612 21:58:35.695911   86948 api_server.go:253] Checking apiserver healthz at https://192.168.50.207:8443/healthz ...
	I0612 21:58:35.696286   86948 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:58:35.696298   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Closing plugin on server side
	I0612 21:58:35.696300   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Closing plugin on server side
	I0612 21:58:35.696305   86948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:58:35.696354   86948 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:58:35.696318   86948 main.go:141] libmachine: Making call to close driver server
	I0612 21:58:35.696395   86948 main.go:141] libmachine: (newest-cni-007396) Calling .Close
	I0612 21:58:35.696378   86948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:58:35.696507   86948 main.go:141] libmachine: Making call to close driver server
	I0612 21:58:35.696516   86948 main.go:141] libmachine: (newest-cni-007396) Calling .Close
	I0612 21:58:35.696787   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Closing plugin on server side
	I0612 21:58:35.696803   86948 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:58:35.696815   86948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:58:35.696827   86948 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:58:35.696834   86948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:58:35.696833   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Closing plugin on server side
	I0612 21:58:35.707110   86948 api_server.go:279] https://192.168.50.207:8443/healthz returned 200:
	ok
	I0612 21:58:35.708912   86948 api_server.go:141] control plane version: v1.30.1
	I0612 21:58:35.708937   86948 api_server.go:131] duration metric: took 13.041257ms to wait for apiserver health ...
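Editor's note: the healthz wait above is a plain HTTPS GET against the apiserver expecting a 200 with the body "ok". A self-contained Go sketch of that probe; skipping TLS verification is an assumption made only so the sketch runs without the cluster's CA bundle (minikube itself trusts the generated CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkHealthz probes the apiserver's /healthz endpoint and reports the
    // status code and body, as in the "returned 200: ok" lines above.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    	return nil
    }

    func main() {
    	// Endpoint taken from the log above.
    	if err := checkHealthz("https://192.168.50.207:8443/healthz"); err != nil {
    		panic(err)
    	}
    }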
	I0612 21:58:35.708947   86948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:58:35.732416   86948 system_pods.go:59] 8 kube-system pods found
	I0612 21:58:35.732462   86948 system_pods.go:61] "coredns-7db6d8ff4d-7996b" [02830689-7662-464f-8a55-e553a984dc5b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:58:35.732474   86948 system_pods.go:61] "coredns-7db6d8ff4d-l5xd5" [e9382fd3-c07c-4eab-8813-a1fb72cf297b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:58:35.732481   86948 system_pods.go:61] "etcd-newest-cni-007396" [bd4b8459-da7a-4439-9880-5bdaadf89146] Running
	I0612 21:58:35.732488   86948 system_pods.go:61] "kube-apiserver-newest-cni-007396" [39eddcf8-9a17-44d6-a141-bdb000607a82] Running
	I0612 21:58:35.732495   86948 system_pods.go:61] "kube-controller-manager-newest-cni-007396" [e6f9fb22-bdda-44cd-bc5f-c51bb7addde0] Running
	I0612 21:58:35.732502   86948 system_pods.go:61] "kube-proxy-j972w" [fb2fd5fd-9c3c-4d01-9ab3-259b5fa602fe] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0612 21:58:35.732508   86948 system_pods.go:61] "kube-scheduler-newest-cni-007396" [60605366-6b4d-4303-b8a7-c3c29a1440a1] Running
	I0612 21:58:35.732514   86948 system_pods.go:61] "storage-provisioner" [b38936ce-e9eb-4c2f-b92d-8e8bdc8503c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:58:35.732522   86948 system_pods.go:74] duration metric: took 23.566681ms to wait for pod list to return data ...
	I0612 21:58:35.732530   86948 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:58:35.733267   86948 main.go:141] libmachine: Making call to close driver server
	I0612 21:58:35.733296   86948 main.go:141] libmachine: (newest-cni-007396) Calling .Close
	I0612 21:58:35.733634   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Closing plugin on server side
	I0612 21:58:35.733683   86948 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:58:35.733694   86948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:58:35.735779   86948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0612 21:58:35.737305   86948 addons.go:510] duration metric: took 1.187553992s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0612 21:58:35.736173   86948 default_sa.go:45] found service account: "default"
	I0612 21:58:35.737343   86948 default_sa.go:55] duration metric: took 4.806416ms for default service account to be created ...
	I0612 21:58:35.737351   86948 kubeadm.go:576] duration metric: took 1.187662747s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0612 21:58:35.737366   86948 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:58:35.741072   86948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:58:35.741095   86948 node_conditions.go:123] node cpu capacity is 2
	I0612 21:58:35.741105   86948 node_conditions.go:105] duration metric: took 3.73469ms to run NodePressure ...
	I0612 21:58:35.741117   86948 start.go:240] waiting for startup goroutines ...
	I0612 21:58:35.874994   86948 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-007396" context rescaled to 1 replicas
	I0612 21:58:35.875033   86948 start.go:245] waiting for cluster config update ...
	I0612 21:58:35.875045   86948 start.go:254] writing updated cluster config ...
	I0612 21:58:35.875308   86948 ssh_runner.go:195] Run: rm -f paused
	I0612 21:58:35.938166   86948 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:58:35.939681   86948 out.go:177] * Done! kubectl is now configured to use "newest-cni-007396" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.873790535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229522873727712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a70ceb29-191e-464d-8af5-92c1cf35c37a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.874299182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b54b608-0460-4fb1-b91c-0de4201b53ce name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.874347873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b54b608-0460-4fb1-b91c-0de4201b53ce name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.874516368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:456a26e2007c446f05111c29fe257ea55ac9aa4f64390753d7b2ad2aec08420d,PodSandboxId:51de2435b4801fd17d8563f20a98cfd2a187bebf18ad47126402320d254108ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228607686254023,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ade8816b-866c-4ba3-9665-fc9b144a4286,},Annotations:map[string]string{io.kubernetes.container.hash: 79c17914,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c1641b1f476cfc4f601ec822ff80a9ee8d47cbd60803d9784e1157a907eced,PodSandboxId:3d0f6c409fe1639f34a5852b3f713811cd2a80aafafc80a7afa602a566572d6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606844016573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hs7zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8af54bf-17f9-48fe-a770-536c2313bc2a,},Annotations:map[string]string{io.kubernetes.container.hash: b78e6ca9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65c73b186f91091b6b9b4656b546bb3ff54b286a42b23fab99f42b63883d8a3,PodSandboxId:5ca8e42ce9f1f7de993bc78c154a76b39b4926d28b57146f76364daae3fba858,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606805588222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpf5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
091154b-ef24-4447-b294-03f8d704f37e,},Annotations:map[string]string{io.kubernetes.container.hash: 695657f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafd4118008a016d83fc26ea50f48bb5d65c039c327915423d0a8cd6174e7b9d,PodSandboxId:b211b1234593f06e6206780c967aaf7ac1475d89f3c90f3eef21ff976773aa83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt
:1718228605813606699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l2wz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7130c7fb-880b-4a7b-937d-3980c89f217a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ae272a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62652ad7fd20de25e0a440d88237903a2caca55e4e6cfb9eef90f37c716f570b,PodSandboxId:93b31e3e61769df84a73c6ea711ac7b2f265e7808c094481714eacd2190790c9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228586377745669,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c528760c1e80f88f75f1e56fecfde584,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7540034e3415b4d9c1685ae0c3b09dc9bfe04a575479cc0eecc567c65c7cce63,PodSandboxId:7d89165e4cb4bca757660b51054d385524f05defa9b920eb6f886fe977078cf9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228586338263388,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83842ac2c4e16e54dde29e303b007929,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:593d39406c63dfef59715265b9658b4b5da66db8584212f23f78bc23f71392a4,PodSandboxId:e701b6df8ab855eaf2e8a20cbf391e93e05fccec112372f23a541d539fe489fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228586338890719,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55dc000dfac3800d39b646c5c11a82c0,},Annotations:map[string]string{io.kubernetes.container.hash: f3eb41bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39dfe322d79671c6df88f6d4c81ccfeb1ea56add7bd86768184df7534f5e86ab,PodSandboxId:5a4a70963c40c75415f5b3dd839d13e4b4ec57d824b48a86d03781919573ccb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228586304584977,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da294bdd0b2d30db40f5d7fa6ca9a0f,},Annotations:map[string]string{io.kubernetes.container.hash: 36ebbbc0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b54b608-0460-4fb1-b91c-0de4201b53ce name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.913745793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0318c36-46ee-46ba-a266-b4057781ec92 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.913870036Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0318c36-46ee-46ba-a266-b4057781ec92 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.915262593Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b26f8444-04fd-4a4d-8e52-586616e097f1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.915648768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229522915628455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b26f8444-04fd-4a4d-8e52-586616e097f1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.916196222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7169f49a-a294-421c-a1b0-86f7158291d6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.916245296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7169f49a-a294-421c-a1b0-86f7158291d6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.916408653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:456a26e2007c446f05111c29fe257ea55ac9aa4f64390753d7b2ad2aec08420d,PodSandboxId:51de2435b4801fd17d8563f20a98cfd2a187bebf18ad47126402320d254108ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228607686254023,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ade8816b-866c-4ba3-9665-fc9b144a4286,},Annotations:map[string]string{io.kubernetes.container.hash: 79c17914,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c1641b1f476cfc4f601ec822ff80a9ee8d47cbd60803d9784e1157a907eced,PodSandboxId:3d0f6c409fe1639f34a5852b3f713811cd2a80aafafc80a7afa602a566572d6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606844016573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hs7zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8af54bf-17f9-48fe-a770-536c2313bc2a,},Annotations:map[string]string{io.kubernetes.container.hash: b78e6ca9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65c73b186f91091b6b9b4656b546bb3ff54b286a42b23fab99f42b63883d8a3,PodSandboxId:5ca8e42ce9f1f7de993bc78c154a76b39b4926d28b57146f76364daae3fba858,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606805588222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpf5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
091154b-ef24-4447-b294-03f8d704f37e,},Annotations:map[string]string{io.kubernetes.container.hash: 695657f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafd4118008a016d83fc26ea50f48bb5d65c039c327915423d0a8cd6174e7b9d,PodSandboxId:b211b1234593f06e6206780c967aaf7ac1475d89f3c90f3eef21ff976773aa83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt
:1718228605813606699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l2wz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7130c7fb-880b-4a7b-937d-3980c89f217a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ae272a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62652ad7fd20de25e0a440d88237903a2caca55e4e6cfb9eef90f37c716f570b,PodSandboxId:93b31e3e61769df84a73c6ea711ac7b2f265e7808c094481714eacd2190790c9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228586377745669,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c528760c1e80f88f75f1e56fecfde584,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7540034e3415b4d9c1685ae0c3b09dc9bfe04a575479cc0eecc567c65c7cce63,PodSandboxId:7d89165e4cb4bca757660b51054d385524f05defa9b920eb6f886fe977078cf9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228586338263388,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83842ac2c4e16e54dde29e303b007929,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:593d39406c63dfef59715265b9658b4b5da66db8584212f23f78bc23f71392a4,PodSandboxId:e701b6df8ab855eaf2e8a20cbf391e93e05fccec112372f23a541d539fe489fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228586338890719,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55dc000dfac3800d39b646c5c11a82c0,},Annotations:map[string]string{io.kubernetes.container.hash: f3eb41bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39dfe322d79671c6df88f6d4c81ccfeb1ea56add7bd86768184df7534f5e86ab,PodSandboxId:5a4a70963c40c75415f5b3dd839d13e4b4ec57d824b48a86d03781919573ccb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228586304584977,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da294bdd0b2d30db40f5d7fa6ca9a0f,},Annotations:map[string]string{io.kubernetes.container.hash: 36ebbbc0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7169f49a-a294-421c-a1b0-86f7158291d6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.956485545Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a77bdc5f-4ae4-4d2a-b3be-72b1798f27b3 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.956557668Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a77bdc5f-4ae4-4d2a-b3be-72b1798f27b3 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.957711016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9576ee0d-c4f1-4b6f-9fb2-962d2692cf26 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.958683942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229522958651990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9576ee0d-c4f1-4b6f-9fb2-962d2692cf26 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.959200733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d12d1bdf-f271-4266-bf29-baf5acbe9ecc name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.959271548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d12d1bdf-f271-4266-bf29-baf5acbe9ecc name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.959512770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:456a26e2007c446f05111c29fe257ea55ac9aa4f64390753d7b2ad2aec08420d,PodSandboxId:51de2435b4801fd17d8563f20a98cfd2a187bebf18ad47126402320d254108ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228607686254023,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ade8816b-866c-4ba3-9665-fc9b144a4286,},Annotations:map[string]string{io.kubernetes.container.hash: 79c17914,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c1641b1f476cfc4f601ec822ff80a9ee8d47cbd60803d9784e1157a907eced,PodSandboxId:3d0f6c409fe1639f34a5852b3f713811cd2a80aafafc80a7afa602a566572d6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606844016573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hs7zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8af54bf-17f9-48fe-a770-536c2313bc2a,},Annotations:map[string]string{io.kubernetes.container.hash: b78e6ca9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65c73b186f91091b6b9b4656b546bb3ff54b286a42b23fab99f42b63883d8a3,PodSandboxId:5ca8e42ce9f1f7de993bc78c154a76b39b4926d28b57146f76364daae3fba858,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606805588222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpf5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
091154b-ef24-4447-b294-03f8d704f37e,},Annotations:map[string]string{io.kubernetes.container.hash: 695657f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafd4118008a016d83fc26ea50f48bb5d65c039c327915423d0a8cd6174e7b9d,PodSandboxId:b211b1234593f06e6206780c967aaf7ac1475d89f3c90f3eef21ff976773aa83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt
:1718228605813606699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l2wz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7130c7fb-880b-4a7b-937d-3980c89f217a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ae272a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62652ad7fd20de25e0a440d88237903a2caca55e4e6cfb9eef90f37c716f570b,PodSandboxId:93b31e3e61769df84a73c6ea711ac7b2f265e7808c094481714eacd2190790c9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228586377745669,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c528760c1e80f88f75f1e56fecfde584,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7540034e3415b4d9c1685ae0c3b09dc9bfe04a575479cc0eecc567c65c7cce63,PodSandboxId:7d89165e4cb4bca757660b51054d385524f05defa9b920eb6f886fe977078cf9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228586338263388,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83842ac2c4e16e54dde29e303b007929,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:593d39406c63dfef59715265b9658b4b5da66db8584212f23f78bc23f71392a4,PodSandboxId:e701b6df8ab855eaf2e8a20cbf391e93e05fccec112372f23a541d539fe489fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228586338890719,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55dc000dfac3800d39b646c5c11a82c0,},Annotations:map[string]string{io.kubernetes.container.hash: f3eb41bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39dfe322d79671c6df88f6d4c81ccfeb1ea56add7bd86768184df7534f5e86ab,PodSandboxId:5a4a70963c40c75415f5b3dd839d13e4b4ec57d824b48a86d03781919573ccb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228586304584977,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da294bdd0b2d30db40f5d7fa6ca9a0f,},Annotations:map[string]string{io.kubernetes.container.hash: 36ebbbc0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d12d1bdf-f271-4266-bf29-baf5acbe9ecc name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.996878056Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5797de47-a640-4b1a-870b-146bf64fd08e name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:42 embed-certs-591460 crio[726]: time="2024-06-12 21:58:42.996966671Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5797de47-a640-4b1a-870b-146bf64fd08e name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:43 embed-certs-591460 crio[726]: time="2024-06-12 21:58:43.002497334Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6aa3eca1-fa59-420d-a897-91592787edef name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:43 embed-certs-591460 crio[726]: time="2024-06-12 21:58:43.002907541Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229523002885557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6aa3eca1-fa59-420d-a897-91592787edef name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:43 embed-certs-591460 crio[726]: time="2024-06-12 21:58:43.003400816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de6577a4-407a-43a8-b0bd-ca9063b26c1c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:43 embed-certs-591460 crio[726]: time="2024-06-12 21:58:43.003479198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de6577a4-407a-43a8-b0bd-ca9063b26c1c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:43 embed-certs-591460 crio[726]: time="2024-06-12 21:58:43.003642534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:456a26e2007c446f05111c29fe257ea55ac9aa4f64390753d7b2ad2aec08420d,PodSandboxId:51de2435b4801fd17d8563f20a98cfd2a187bebf18ad47126402320d254108ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228607686254023,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ade8816b-866c-4ba3-9665-fc9b144a4286,},Annotations:map[string]string{io.kubernetes.container.hash: 79c17914,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c1641b1f476cfc4f601ec822ff80a9ee8d47cbd60803d9784e1157a907eced,PodSandboxId:3d0f6c409fe1639f34a5852b3f713811cd2a80aafafc80a7afa602a566572d6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606844016573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hs7zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8af54bf-17f9-48fe-a770-536c2313bc2a,},Annotations:map[string]string{io.kubernetes.container.hash: b78e6ca9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65c73b186f91091b6b9b4656b546bb3ff54b286a42b23fab99f42b63883d8a3,PodSandboxId:5ca8e42ce9f1f7de993bc78c154a76b39b4926d28b57146f76364daae3fba858,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228606805588222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpf5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
091154b-ef24-4447-b294-03f8d704f37e,},Annotations:map[string]string{io.kubernetes.container.hash: 695657f9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafd4118008a016d83fc26ea50f48bb5d65c039c327915423d0a8cd6174e7b9d,PodSandboxId:b211b1234593f06e6206780c967aaf7ac1475d89f3c90f3eef21ff976773aa83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt
:1718228605813606699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l2wz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7130c7fb-880b-4a7b-937d-3980c89f217a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ae272a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62652ad7fd20de25e0a440d88237903a2caca55e4e6cfb9eef90f37c716f570b,PodSandboxId:93b31e3e61769df84a73c6ea711ac7b2f265e7808c094481714eacd2190790c9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228586377745669,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c528760c1e80f88f75f1e56fecfde584,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7540034e3415b4d9c1685ae0c3b09dc9bfe04a575479cc0eecc567c65c7cce63,PodSandboxId:7d89165e4cb4bca757660b51054d385524f05defa9b920eb6f886fe977078cf9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228586338263388,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83842ac2c4e16e54dde29e303b007929,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:593d39406c63dfef59715265b9658b4b5da66db8584212f23f78bc23f71392a4,PodSandboxId:e701b6df8ab855eaf2e8a20cbf391e93e05fccec112372f23a541d539fe489fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228586338890719,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55dc000dfac3800d39b646c5c11a82c0,},Annotations:map[string]string{io.kubernetes.container.hash: f3eb41bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39dfe322d79671c6df88f6d4c81ccfeb1ea56add7bd86768184df7534f5e86ab,PodSandboxId:5a4a70963c40c75415f5b3dd839d13e4b4ec57d824b48a86d03781919573ccb3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228586304584977,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-591460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da294bdd0b2d30db40f5d7fa6ca9a0f,},Annotations:map[string]string{io.kubernetes.container.hash: 36ebbbc0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de6577a4-407a-43a8-b0bd-ca9063b26c1c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	456a26e2007c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   51de2435b4801       storage-provisioner
	77c1641b1f476       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   3d0f6c409fe16       coredns-7db6d8ff4d-hs7zn
	f65c73b186f91       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   5ca8e42ce9f1f       coredns-7db6d8ff4d-fpf5q
	cafd4118008a0       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   15 minutes ago      Running             kube-proxy                0                   b211b1234593f       kube-proxy-5l2wz
	62652ad7fd20d       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   15 minutes ago      Running             kube-scheduler            2                   93b31e3e61769       kube-scheduler-embed-certs-591460
	593d39406c63d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   e701b6df8ab85       etcd-embed-certs-591460
	7540034e3415b       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   15 minutes ago      Running             kube-controller-manager   2                   7d89165e4cb4b       kube-controller-manager-embed-certs-591460
	39dfe322d7967       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   15 minutes ago      Running             kube-apiserver            2                   5a4a70963c40c       kube-apiserver-embed-certs-591460
	
	
	==> coredns [77c1641b1f476cfc4f601ec822ff80a9ee8d47cbd60803d9784e1157a907eced] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f65c73b186f91091b6b9b4656b546bb3ff54b286a42b23fab99f42b63883d8a3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-591460
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-591460
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=embed-certs-591460
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T21_43_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:43:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-591460
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:58:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:53:44 +0000   Wed, 12 Jun 2024 21:43:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:53:44 +0000   Wed, 12 Jun 2024 21:43:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:53:44 +0000   Wed, 12 Jun 2024 21:43:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:53:44 +0000   Wed, 12 Jun 2024 21:43:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    embed-certs-591460
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 be2a1b8c15954fe4a88099a11e94a7f9
	  System UUID:                be2a1b8c-1595-4fe4-a880-99a11e94a7f9
	  Boot ID:                    1230b539-0b4f-433c-aa97-d3b198afe346
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fpf5q                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-hs7zn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-591460                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-591460             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-591460    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-5l2wz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-591460             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-r7fbt               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-591460 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-591460 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-591460 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-591460 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-591460 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-591460 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-591460 event: Registered Node embed-certs-591460 in Controller
	
	
	==> dmesg <==
	[  +0.052436] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042214] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.641743] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.449793] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.631824] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun12 21:38] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.058959] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059049] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.211065] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.140132] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.318457] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[  +4.609269] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.067804] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.109318] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +4.646862] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.754819] kauditd_printk_skb: 79 callbacks suppressed
	[Jun12 21:43] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.467052] systemd-fstab-generator[3574]: Ignoring "noauto" option for root device
	[  +4.541687] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.009944] systemd-fstab-generator[3893]: Ignoring "noauto" option for root device
	[ +13.880944] systemd-fstab-generator[4092]: Ignoring "noauto" option for root device
	[  +0.107272] kauditd_printk_skb: 14 callbacks suppressed
	[Jun12 21:44] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [593d39406c63dfef59715265b9658b4b5da66db8584212f23f78bc23f71392a4] <==
	{"level":"info","ts":"2024-06-12T21:43:06.757944Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-12T21:43:07.656928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-12T21:43:07.657026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-12T21:43:07.657134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d received MsgPreVoteResp from c194f0f1585e7a7d at term 1"}
	{"level":"info","ts":"2024-06-12T21:43:07.657165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d became candidate at term 2"}
	{"level":"info","ts":"2024-06-12T21:43:07.657188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d received MsgVoteResp from c194f0f1585e7a7d at term 2"}
	{"level":"info","ts":"2024-06-12T21:43:07.657215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d became leader at term 2"}
	{"level":"info","ts":"2024-06-12T21:43:07.65724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c194f0f1585e7a7d elected leader c194f0f1585e7a7d at term 2"}
	{"level":"info","ts":"2024-06-12T21:43:07.661292Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:43:07.662185Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c194f0f1585e7a7d","local-member-attributes":"{Name:embed-certs-591460 ClientURLs:[https://192.168.39.147:2379]}","request-path":"/0/members/c194f0f1585e7a7d/attributes","cluster-id":"582b8c8375119e1d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-12T21:43:07.66231Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:43:07.662363Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:43:07.668593Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-12T21:43:07.670347Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.147:2379"}
	{"level":"info","ts":"2024-06-12T21:43:07.675076Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-12T21:43:07.702095Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-12T21:43:07.687453Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"582b8c8375119e1d","local-member-id":"c194f0f1585e7a7d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:43:07.702227Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:43:07.702278Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:53:07.714896Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":716}
	{"level":"info","ts":"2024-06-12T21:53:07.724266Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":716,"took":"8.705372ms","hash":1092485163,"current-db-size-bytes":2162688,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2162688,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-06-12T21:53:07.724367Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1092485163,"revision":716,"compact-revision":-1}
	{"level":"info","ts":"2024-06-12T21:58:07.722903Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":959}
	{"level":"info","ts":"2024-06-12T21:58:07.727548Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":959,"took":"3.932625ms","hash":3883892023,"current-db-size-bytes":2162688,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1531904,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-06-12T21:58:07.727656Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3883892023,"revision":959,"compact-revision":716}
	
	
	==> kernel <==
	 21:58:43 up 20 min,  0 users,  load average: 0.09, 0.07, 0.08
	Linux embed-certs-591460 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [39dfe322d79671c6df88f6d4c81ccfeb1ea56add7bd86768184df7534f5e86ab] <==
	I0612 21:53:10.122771       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:54:10.121312       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:54:10.121396       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:54:10.121409       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:54:10.123743       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:54:10.123805       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:54:10.123813       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:56:10.121904       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:56:10.121993       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:56:10.122006       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:56:10.124337       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:56:10.124446       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:56:10.124455       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:58:09.127242       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:58:09.127613       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0612 21:58:10.128202       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:58:10.128253       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:58:10.128261       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:58:10.128347       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:58:10.128464       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:58:10.129413       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7540034e3415b4d9c1685ae0c3b09dc9bfe04a575479cc0eecc567c65c7cce63] <==
	I0612 21:52:55.092089       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:53:24.602814       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:53:25.105921       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:53:54.608518       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:53:55.113594       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:54:24.614893       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:54:25.009176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="329.005µs"
	I0612 21:54:25.122822       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0612 21:54:37.006884       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="53.557µs"
	E0612 21:54:54.620909       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:54:55.130178       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:55:24.626876       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:55:25.141893       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:55:54.633361       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:55:55.150747       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:56:24.639289       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:56:25.159886       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:56:54.644821       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:56:55.168941       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:57:24.649672       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:57:25.180010       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:57:54.657879       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:57:55.189449       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:58:24.663924       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:58:25.198391       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [cafd4118008a016d83fc26ea50f48bb5d65c039c327915423d0a8cd6174e7b9d] <==
	I0612 21:43:26.187865       1 server_linux.go:69] "Using iptables proxy"
	I0612 21:43:26.220739       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.147"]
	I0612 21:43:26.297603       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 21:43:26.297650       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 21:43:26.297671       1 server_linux.go:165] "Using iptables Proxier"
	I0612 21:43:26.302762       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 21:43:26.302932       1 server.go:872] "Version info" version="v1.30.1"
	I0612 21:43:26.302963       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:43:26.306612       1 config.go:192] "Starting service config controller"
	I0612 21:43:26.306628       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 21:43:26.306647       1 config.go:101] "Starting endpoint slice config controller"
	I0612 21:43:26.306651       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 21:43:26.306966       1 config.go:319] "Starting node config controller"
	I0612 21:43:26.306972       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 21:43:26.407258       1 shared_informer.go:320] Caches are synced for node config
	I0612 21:43:26.407287       1 shared_informer.go:320] Caches are synced for service config
	I0612 21:43:26.407340       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [62652ad7fd20de25e0a440d88237903a2caca55e4e6cfb9eef90f37c716f570b] <==
	W0612 21:43:09.168134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0612 21:43:09.171223       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0612 21:43:09.168189       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0612 21:43:09.171361       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0612 21:43:09.171535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0612 21:43:09.171655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0612 21:43:10.003655       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0612 21:43:10.003771       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0612 21:43:10.016281       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 21:43:10.016408       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 21:43:10.061076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0612 21:43:10.061163       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0612 21:43:10.073551       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0612 21:43:10.073691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0612 21:43:10.176353       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0612 21:43:10.176642       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0612 21:43:10.226734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0612 21:43:10.227184       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0612 21:43:10.257174       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0612 21:43:10.257541       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0612 21:43:10.367564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0612 21:43:10.367898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0612 21:43:10.401530       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0612 21:43:10.401665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 21:43:11.842096       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
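
The repeated "forbidden" list/watch failures above are a startup race rather than a scheduler fault: kube-scheduler's informers start listing resources before the API server's default RBAC policy for the system:kube-scheduler user is fully in place, which is typical of a cold control-plane start, and the "Caches are synced" line immediately above shows the informers recovered once it was. As an illustrative manual spot-check (not something the test runs), the scheduler's permissions can be probed by impersonation against this profile's context:

    # Confirm the scheduler user can now list the resources it was denied during startup.
    kubectl --context embed-certs-591460 auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler
    kubectl --context embed-certs-591460 auth can-i list statefulsets.apps --as=system:kube-scheduler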
	
	
	==> kubelet <==
	Jun 12 21:56:12 embed-certs-591460 kubelet[3900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:56:12 embed-certs-591460 kubelet[3900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:56:13 embed-certs-591460 kubelet[3900]: E0612 21:56:13.991367    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:56:27 embed-certs-591460 kubelet[3900]: E0612 21:56:27.992191    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:56:42 embed-certs-591460 kubelet[3900]: E0612 21:56:42.990597    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:56:55 embed-certs-591460 kubelet[3900]: E0612 21:56:55.991634    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:57:06 embed-certs-591460 kubelet[3900]: E0612 21:57:06.990165    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:57:12 embed-certs-591460 kubelet[3900]: E0612 21:57:12.005485    3900 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:57:12 embed-certs-591460 kubelet[3900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:57:12 embed-certs-591460 kubelet[3900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:57:12 embed-certs-591460 kubelet[3900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:57:12 embed-certs-591460 kubelet[3900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:57:19 embed-certs-591460 kubelet[3900]: E0612 21:57:19.991276    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:57:31 embed-certs-591460 kubelet[3900]: E0612 21:57:31.991249    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:57:44 embed-certs-591460 kubelet[3900]: E0612 21:57:44.991129    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:57:55 embed-certs-591460 kubelet[3900]: E0612 21:57:55.990689    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:58:07 embed-certs-591460 kubelet[3900]: E0612 21:58:07.991241    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:58:12 embed-certs-591460 kubelet[3900]: E0612 21:58:12.006215    3900 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:58:12 embed-certs-591460 kubelet[3900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:58:12 embed-certs-591460 kubelet[3900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:58:12 embed-certs-591460 kubelet[3900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:58:12 embed-certs-591460 kubelet[3900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:58:18 embed-certs-591460 kubelet[3900]: E0612 21:58:18.991183    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:58:29 embed-certs-591460 kubelet[3900]: E0612 21:58:29.992887    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
	Jun 12 21:58:40 embed-certs-591460 kubelet[3900]: E0612 21:58:40.990406    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-r7fbt" podUID="e33a1ff8-3032-4be5-8b6a-3eedfbb92611"
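
Two separate symptoms repeat through this kubelet log. The metrics-server ImagePullBackOff is expected for this suite: as the Audit table below records, the addon was enabled with its registry rewritten to fake.domain, so the reference fake.domain/registry.k8s.io/echoserver:1.4 can never be pulled. The iptables canary error only means the guest kernel has no ip6tables "nat" table loaded; it recurs every minute and is unrelated to the failure. Illustrative commands for confirming both by hand, using names taken from the log (not part of the test run):

    # Show the image reference the failing pod is trying to pull.
    kubectl --context embed-certs-591460 -n kube-system describe pod metrics-server-569cc877fc-r7fbt
    # Check the guest's ip6tables nat table directly.
    out/minikube-linux-amd64 -p embed-certs-591460 ssh "sudo ip6tables -t nat -L"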
	
	
	==> storage-provisioner [456a26e2007c446f05111c29fe257ea55ac9aa4f64390753d7b2ad2aec08420d] <==
	I0612 21:43:27.793337       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0612 21:43:27.807553       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0612 21:43:27.807656       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0612 21:43:27.819125       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0612 21:43:27.819624       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-591460_838604cf-6703-4879-a7a7-57d5015a543a!
	I0612 21:43:27.824439       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"142195aa-ac84-4e90-b8a3-6644b794cbbe", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-591460_838604cf-6703-4879-a7a7-57d5015a543a became leader
	I0612 21:43:27.921158       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-591460_838604cf-6703-4879-a7a7-57d5015a543a!
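
The provisioner itself is healthy here: it acquires a leader-election lease recorded on the kube-system/k8s.io-minikube-hostpath Endpoints object and then starts its controller. If leadership ever looked stuck, the current holder can be read off that object (the holder identity sits in its annotations); an illustrative check, not part of the test:

    kubectl --context embed-certs-591460 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml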
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-591460 -n embed-certs-591460
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-591460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-r7fbt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-591460 describe pod metrics-server-569cc877fc-r7fbt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-591460 describe pod metrics-server-569cc877fc-r7fbt: exit status 1 (63.640161ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-r7fbt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-591460 describe pod metrics-server-569cc877fc-r7fbt: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (369.75s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (280.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-087875 -n no-preload-087875
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-06-12 21:58:04.890118339 +0000 UTC m=+6436.504568719
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-087875 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-087875 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.61µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-087875 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
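
The assertion at this step only inspects the dashboard-metrics-scraper Deployment and expects its container image to contain registry.k8s.io/echoserver:1.4, the override passed via --images when the dashboard addon was enabled (see the Audit table below); the describe call produced nothing because the test's context deadline had already expired. An equivalent manual check, assuming the cluster is still reachable (illustrative only):

    # Print the scraper image string the assertion would have matched against.
    kubectl --context no-preload-087875 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'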
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-087875 -n no-preload-087875
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-087875 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-087875 logs -n 25: (1.512505512s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-576552 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | disable-driver-mounts-576552                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-087875             | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-376087  | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-591460            | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-983302        | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-087875                  | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-376087       | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:42 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-591460                 | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-983302             | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:57 UTC | 12 Jun 24 21:57 UTC |
	| start   | -p newest-cni-007396 --memory=2200 --alsologtostderr   | newest-cni-007396            | jenkins | v1.33.1 | 12 Jun 24 21:57 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
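
The metrics-server rows in this table explain the ImagePullBackOff seen in the kubelet logs: each profile enables the addon with its image rewritten to registry.k8s.io/echoserver:1.4 and its registry rewritten to the unreachable fake.domain, so the Deployment exists but the pull is meant to fail, and the later assertions only check the image string. Reassembled from the table row for no-preload-087875, the invocation looks like:

    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-087875 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain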
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 21:57:39
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 21:57:39.550876   86948 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:57:39.551091   86948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:57:39.551099   86948 out.go:304] Setting ErrFile to fd 2...
	I0612 21:57:39.551103   86948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:57:39.551305   86948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:57:39.551845   86948 out.go:298] Setting JSON to false
	I0612 21:57:39.552797   86948 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9605,"bootTime":1718219855,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:57:39.552852   86948 start.go:139] virtualization: kvm guest
	I0612 21:57:39.555092   86948 out.go:177] * [newest-cni-007396] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:57:39.556394   86948 notify.go:220] Checking for updates...
	I0612 21:57:39.556401   86948 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:57:39.557868   86948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:57:39.559183   86948 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:57:39.560464   86948 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:57:39.561707   86948 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:57:39.562862   86948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:57:39.564433   86948 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:57:39.564581   86948 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:57:39.564673   86948 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:57:39.564757   86948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:57:39.602527   86948 out.go:177] * Using the kvm2 driver based on user configuration
	I0612 21:57:39.603758   86948 start.go:297] selected driver: kvm2
	I0612 21:57:39.603773   86948 start.go:901] validating driver "kvm2" against <nil>
	I0612 21:57:39.603791   86948 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:57:39.604500   86948 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:57:39.604557   86948 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:57:39.619433   86948 install.go:137] /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:57:39.619484   86948 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0612 21:57:39.619509   86948 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0612 21:57:39.619809   86948 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0612 21:57:39.619881   86948 cni.go:84] Creating CNI manager for ""
	I0612 21:57:39.619898   86948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:57:39.619906   86948 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0612 21:57:39.619980   86948 start.go:340] cluster config:
	{Name:newest-cni-007396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-007396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:57:39.620120   86948 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:57:39.622163   86948 out.go:177] * Starting "newest-cni-007396" primary control-plane node in "newest-cni-007396" cluster
	I0612 21:57:39.623198   86948 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:57:39.623233   86948 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0612 21:57:39.623239   86948 cache.go:56] Caching tarball of preloaded images
	I0612 21:57:39.623306   86948 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:57:39.623317   86948 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 21:57:39.623400   86948 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/config.json ...
	I0612 21:57:39.623415   86948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/config.json: {Name:mkddd57eb5daa435dc3b365b712f5a3c8140a077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:57:39.623523   86948 start.go:360] acquireMachinesLock for newest-cni-007396: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:57:39.623548   86948 start.go:364] duration metric: took 14.312µs to acquireMachinesLock for "newest-cni-007396"
	I0612 21:57:39.623561   86948 start.go:93] Provisioning new machine with config: &{Name:newest-cni-007396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:newest-cni-007396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:57:39.623612   86948 start.go:125] createHost starting for "" (driver="kvm2")
	I0612 21:57:39.625081   86948 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 21:57:39.625187   86948 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:57:39.625223   86948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:57:39.639278   86948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37357
	I0612 21:57:39.639724   86948 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:57:39.640265   86948 main.go:141] libmachine: Using API Version  1
	I0612 21:57:39.640286   86948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:57:39.640560   86948 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:57:39.640759   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetMachineName
	I0612 21:57:39.640954   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:57:39.641113   86948 start.go:159] libmachine.API.Create for "newest-cni-007396" (driver="kvm2")
	I0612 21:57:39.641148   86948 client.go:168] LocalClient.Create starting
	I0612 21:57:39.641174   86948 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem
	I0612 21:57:39.641200   86948 main.go:141] libmachine: Decoding PEM data...
	I0612 21:57:39.641212   86948 main.go:141] libmachine: Parsing certificate...
	I0612 21:57:39.641270   86948 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem
	I0612 21:57:39.641290   86948 main.go:141] libmachine: Decoding PEM data...
	I0612 21:57:39.641303   86948 main.go:141] libmachine: Parsing certificate...
	I0612 21:57:39.641319   86948 main.go:141] libmachine: Running pre-create checks...
	I0612 21:57:39.641327   86948 main.go:141] libmachine: (newest-cni-007396) Calling .PreCreateCheck
	I0612 21:57:39.641700   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetConfigRaw
	I0612 21:57:39.642164   86948 main.go:141] libmachine: Creating machine...
	I0612 21:57:39.642181   86948 main.go:141] libmachine: (newest-cni-007396) Calling .Create
	I0612 21:57:39.642316   86948 main.go:141] libmachine: (newest-cni-007396) Creating KVM machine...
	I0612 21:57:39.643669   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found existing default KVM network
	I0612 21:57:39.644988   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:39.644853   86970 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b9:6b:ca} reservation:<nil>}
	I0612 21:57:39.645969   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:39.645912   86970 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002b8150}
	I0612 21:57:39.646019   86948 main.go:141] libmachine: (newest-cni-007396) DBG | created network xml: 
	I0612 21:57:39.646043   86948 main.go:141] libmachine: (newest-cni-007396) DBG | <network>
	I0612 21:57:39.646054   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   <name>mk-newest-cni-007396</name>
	I0612 21:57:39.646066   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   <dns enable='no'/>
	I0612 21:57:39.646074   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   
	I0612 21:57:39.646080   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0612 21:57:39.646086   86948 main.go:141] libmachine: (newest-cni-007396) DBG |     <dhcp>
	I0612 21:57:39.646094   86948 main.go:141] libmachine: (newest-cni-007396) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0612 21:57:39.646102   86948 main.go:141] libmachine: (newest-cni-007396) DBG |     </dhcp>
	I0612 21:57:39.646109   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   </ip>
	I0612 21:57:39.646115   86948 main.go:141] libmachine: (newest-cni-007396) DBG |   
	I0612 21:57:39.646125   86948 main.go:141] libmachine: (newest-cni-007396) DBG | </network>
	I0612 21:57:39.646152   86948 main.go:141] libmachine: (newest-cni-007396) DBG | 
	I0612 21:57:39.652264   86948 main.go:141] libmachine: (newest-cni-007396) DBG | trying to create private KVM network mk-newest-cni-007396 192.168.50.0/24...
	I0612 21:57:39.722112   86948 main.go:141] libmachine: (newest-cni-007396) DBG | private KVM network mk-newest-cni-007396 192.168.50.0/24 created
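
libmachine generated the network XML shown above (mk-newest-cni-007396, 192.168.50.0/24, DHCP range 192.168.50.2-192.168.50.253, DNS disabled) and had libvirt create it as the private network for the new profile. If network creation ever needs to be inspected by hand, the definition libvirt actually holds can be dumped with virsh; an illustrative check outside the test run:

    # List libvirt networks and dump the one minikube just created.
    sudo virsh net-list --all
    sudo virsh net-dumpxml mk-newest-cni-007396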
	I0612 21:57:39.722210   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:39.722103   86970 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:57:39.722240   86948 main.go:141] libmachine: (newest-cni-007396) Setting up store path in /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396 ...
	I0612 21:57:39.722309   86948 main.go:141] libmachine: (newest-cni-007396) Building disk image from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0612 21:57:39.722340   86948 main.go:141] libmachine: (newest-cni-007396) Downloading /home/jenkins/minikube-integration/17779-14199/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0612 21:57:39.949912   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:39.949748   86970 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa...
	I0612 21:57:40.367958   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:40.367803   86970 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/newest-cni-007396.rawdisk...
	I0612 21:57:40.367993   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Writing magic tar header
	I0612 21:57:40.368005   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Writing SSH key tar header
	I0612 21:57:40.368014   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:40.367917   86970 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396 ...
	I0612 21:57:40.368030   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396
	I0612 21:57:40.368039   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube/machines
	I0612 21:57:40.368052   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396 (perms=drwx------)
	I0612 21:57:40.368066   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube/machines (perms=drwxr-xr-x)
	I0612 21:57:40.368080   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199/.minikube (perms=drwxr-xr-x)
	I0612 21:57:40.368097   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins/minikube-integration/17779-14199 (perms=drwxrwxr-x)
	I0612 21:57:40.368106   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0612 21:57:40.368143   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:57:40.368168   86948 main.go:141] libmachine: (newest-cni-007396) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0612 21:57:40.368175   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17779-14199
	I0612 21:57:40.368184   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0612 21:57:40.368191   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home/jenkins
	I0612 21:57:40.368216   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Checking permissions on dir: /home
	I0612 21:57:40.368230   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Skipping /home - not owner
	I0612 21:57:40.368244   86948 main.go:141] libmachine: (newest-cni-007396) Creating domain...
	I0612 21:57:40.369412   86948 main.go:141] libmachine: (newest-cni-007396) define libvirt domain using xml: 
	I0612 21:57:40.369429   86948 main.go:141] libmachine: (newest-cni-007396) <domain type='kvm'>
	I0612 21:57:40.369436   86948 main.go:141] libmachine: (newest-cni-007396)   <name>newest-cni-007396</name>
	I0612 21:57:40.369441   86948 main.go:141] libmachine: (newest-cni-007396)   <memory unit='MiB'>2200</memory>
	I0612 21:57:40.369447   86948 main.go:141] libmachine: (newest-cni-007396)   <vcpu>2</vcpu>
	I0612 21:57:40.369455   86948 main.go:141] libmachine: (newest-cni-007396)   <features>
	I0612 21:57:40.369463   86948 main.go:141] libmachine: (newest-cni-007396)     <acpi/>
	I0612 21:57:40.369474   86948 main.go:141] libmachine: (newest-cni-007396)     <apic/>
	I0612 21:57:40.369483   86948 main.go:141] libmachine: (newest-cni-007396)     <pae/>
	I0612 21:57:40.369495   86948 main.go:141] libmachine: (newest-cni-007396)     
	I0612 21:57:40.369504   86948 main.go:141] libmachine: (newest-cni-007396)   </features>
	I0612 21:57:40.369520   86948 main.go:141] libmachine: (newest-cni-007396)   <cpu mode='host-passthrough'>
	I0612 21:57:40.369553   86948 main.go:141] libmachine: (newest-cni-007396)   
	I0612 21:57:40.369579   86948 main.go:141] libmachine: (newest-cni-007396)   </cpu>
	I0612 21:57:40.369590   86948 main.go:141] libmachine: (newest-cni-007396)   <os>
	I0612 21:57:40.369597   86948 main.go:141] libmachine: (newest-cni-007396)     <type>hvm</type>
	I0612 21:57:40.369622   86948 main.go:141] libmachine: (newest-cni-007396)     <boot dev='cdrom'/>
	I0612 21:57:40.369631   86948 main.go:141] libmachine: (newest-cni-007396)     <boot dev='hd'/>
	I0612 21:57:40.369636   86948 main.go:141] libmachine: (newest-cni-007396)     <bootmenu enable='no'/>
	I0612 21:57:40.369643   86948 main.go:141] libmachine: (newest-cni-007396)   </os>
	I0612 21:57:40.369650   86948 main.go:141] libmachine: (newest-cni-007396)   <devices>
	I0612 21:57:40.369668   86948 main.go:141] libmachine: (newest-cni-007396)     <disk type='file' device='cdrom'>
	I0612 21:57:40.369685   86948 main.go:141] libmachine: (newest-cni-007396)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/boot2docker.iso'/>
	I0612 21:57:40.369701   86948 main.go:141] libmachine: (newest-cni-007396)       <target dev='hdc' bus='scsi'/>
	I0612 21:57:40.369713   86948 main.go:141] libmachine: (newest-cni-007396)       <readonly/>
	I0612 21:57:40.369719   86948 main.go:141] libmachine: (newest-cni-007396)     </disk>
	I0612 21:57:40.369725   86948 main.go:141] libmachine: (newest-cni-007396)     <disk type='file' device='disk'>
	I0612 21:57:40.369734   86948 main.go:141] libmachine: (newest-cni-007396)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0612 21:57:40.369766   86948 main.go:141] libmachine: (newest-cni-007396)       <source file='/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/newest-cni-007396.rawdisk'/>
	I0612 21:57:40.369805   86948 main.go:141] libmachine: (newest-cni-007396)       <target dev='hda' bus='virtio'/>
	I0612 21:57:40.369819   86948 main.go:141] libmachine: (newest-cni-007396)     </disk>
	I0612 21:57:40.369832   86948 main.go:141] libmachine: (newest-cni-007396)     <interface type='network'>
	I0612 21:57:40.369846   86948 main.go:141] libmachine: (newest-cni-007396)       <source network='mk-newest-cni-007396'/>
	I0612 21:57:40.369857   86948 main.go:141] libmachine: (newest-cni-007396)       <model type='virtio'/>
	I0612 21:57:40.369868   86948 main.go:141] libmachine: (newest-cni-007396)     </interface>
	I0612 21:57:40.369884   86948 main.go:141] libmachine: (newest-cni-007396)     <interface type='network'>
	I0612 21:57:40.369900   86948 main.go:141] libmachine: (newest-cni-007396)       <source network='default'/>
	I0612 21:57:40.369911   86948 main.go:141] libmachine: (newest-cni-007396)       <model type='virtio'/>
	I0612 21:57:40.369918   86948 main.go:141] libmachine: (newest-cni-007396)     </interface>
	I0612 21:57:40.369927   86948 main.go:141] libmachine: (newest-cni-007396)     <serial type='pty'>
	I0612 21:57:40.369935   86948 main.go:141] libmachine: (newest-cni-007396)       <target port='0'/>
	I0612 21:57:40.369947   86948 main.go:141] libmachine: (newest-cni-007396)     </serial>
	I0612 21:57:40.369954   86948 main.go:141] libmachine: (newest-cni-007396)     <console type='pty'>
	I0612 21:57:40.369967   86948 main.go:141] libmachine: (newest-cni-007396)       <target type='serial' port='0'/>
	I0612 21:57:40.369977   86948 main.go:141] libmachine: (newest-cni-007396)     </console>
	I0612 21:57:40.369986   86948 main.go:141] libmachine: (newest-cni-007396)     <rng model='virtio'>
	I0612 21:57:40.369995   86948 main.go:141] libmachine: (newest-cni-007396)       <backend model='random'>/dev/random</backend>
	I0612 21:57:40.370002   86948 main.go:141] libmachine: (newest-cni-007396)     </rng>
	I0612 21:57:40.370016   86948 main.go:141] libmachine: (newest-cni-007396)     
	I0612 21:57:40.370026   86948 main.go:141] libmachine: (newest-cni-007396)     
	I0612 21:57:40.370036   86948 main.go:141] libmachine: (newest-cni-007396)   </devices>
	I0612 21:57:40.370046   86948 main.go:141] libmachine: (newest-cni-007396) </domain>
	I0612 21:57:40.370060   86948 main.go:141] libmachine: (newest-cni-007396) 
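
The domain definition above gives the VM 2 vCPUs, 2200 MiB of memory, the boot2docker ISO as a cdrom, the raw disk image, and two virtio NICs: one on the private mk-newest-cni-007396 network and one on libvirt's default network, which is why two MAC addresses are reported below. A hedged way to confirm what libvirt registered, again outside the test itself:

    # Show the NICs attached to the freshly defined domain.
    sudo virsh domiflist newest-cni-007396
    sudo virsh dumpxml newest-cni-007396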
	I0612 21:57:40.374484   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:ac:61:40 in network default
	I0612 21:57:40.375055   86948 main.go:141] libmachine: (newest-cni-007396) Ensuring networks are active...
	I0612 21:57:40.375074   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:40.375755   86948 main.go:141] libmachine: (newest-cni-007396) Ensuring network default is active
	I0612 21:57:40.376055   86948 main.go:141] libmachine: (newest-cni-007396) Ensuring network mk-newest-cni-007396 is active
	I0612 21:57:40.376588   86948 main.go:141] libmachine: (newest-cni-007396) Getting domain xml...
	I0612 21:57:40.377311   86948 main.go:141] libmachine: (newest-cni-007396) Creating domain...
	I0612 21:57:41.646694   86948 main.go:141] libmachine: (newest-cni-007396) Waiting to get IP...
	I0612 21:57:41.647535   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:41.647983   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:41.648009   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:41.647967   86970 retry.go:31] will retry after 232.64418ms: waiting for machine to come up
	I0612 21:57:41.882517   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:41.883132   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:41.883162   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:41.883063   86970 retry.go:31] will retry after 300.678306ms: waiting for machine to come up
	I0612 21:57:42.185385   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:42.185837   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:42.185867   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:42.185788   86970 retry.go:31] will retry after 322.355198ms: waiting for machine to come up
	I0612 21:57:42.509318   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:42.509851   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:42.509874   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:42.509823   86970 retry.go:31] will retry after 383.48604ms: waiting for machine to come up
	I0612 21:57:42.895499   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:42.896051   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:42.896083   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:42.896000   86970 retry.go:31] will retry after 681.668123ms: waiting for machine to come up
	I0612 21:57:43.579089   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:43.579655   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:43.579692   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:43.579608   86970 retry.go:31] will retry after 772.173706ms: waiting for machine to come up
	I0612 21:57:44.353493   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:44.353942   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:44.353965   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:44.353889   86970 retry.go:31] will retry after 1.081187064s: waiting for machine to come up
	I0612 21:57:45.436451   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:45.436949   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:45.436977   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:45.436901   86970 retry.go:31] will retry after 1.312080042s: waiting for machine to come up
	I0612 21:57:46.751288   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:46.751800   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:46.751823   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:46.751758   86970 retry.go:31] will retry after 1.211250846s: waiting for machine to come up
	I0612 21:57:47.964813   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:47.965255   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:47.965280   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:47.965195   86970 retry.go:31] will retry after 1.673381258s: waiting for machine to come up
	I0612 21:57:49.640173   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:49.640641   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:49.640664   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:49.640609   86970 retry.go:31] will retry after 1.995026566s: waiting for machine to come up
	I0612 21:57:51.638102   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:51.638614   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:51.638639   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:51.638561   86970 retry.go:31] will retry after 3.197679013s: waiting for machine to come up
	I0612 21:57:54.837509   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:54.838000   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:54.838028   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:54.837956   86970 retry.go:31] will retry after 3.462181977s: waiting for machine to come up
	I0612 21:57:58.304412   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:57:58.304897   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find current IP address of domain newest-cni-007396 in network mk-newest-cni-007396
	I0612 21:57:58.304931   86948 main.go:141] libmachine: (newest-cni-007396) DBG | I0612 21:57:58.304819   86970 retry.go:31] will retry after 3.755357309s: waiting for machine to come up
	I0612 21:58:02.062774   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.063322   86948 main.go:141] libmachine: (newest-cni-007396) Found IP for machine: 192.168.50.207
	I0612 21:58:02.063351   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has current primary IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
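The block above is a plain poll-with-backoff loop: libmachine asks libvirt for the domain's DHCP lease and, while no address is known, sleeps for a growing interval (a few hundred milliseconds at first, a few seconds towards the end) before retrying. A minimal sketch of that pattern follows; the lookup callback, the timeout, and the growth factor are illustrative assumptions, not minikube's retry.go.

    package main

    import (
        "errors"
        "fmt"
        "log"
        "time"
    )

    // waitForIP polls lookup until it yields an address or the deadline passes,
    // sleeping a growing interval between attempts, like the retry trace above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            log.Printf("will retry after %s: waiting for machine to come up", delay)
            time.Sleep(delay)
            if delay < 4*time.Second {
                delay = delay * 3 / 2 // grow the backoff towards a few seconds
            }
        }
        return "", fmt.Errorf("no IP within %s", timeout)
    }

    func main() {
        start := time.Now()
        ip, err := waitForIP(func() (string, error) {
            if time.Since(start) < 2*time.Second {
                return "", errors.New("no DHCP lease yet") // stand-in for "unable to find current IP"
            }
            return "192.168.50.207", nil
        }, 30*time.Second)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("Found IP for machine:", ip)
    }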
	I0612 21:58:02.063381   86948 main.go:141] libmachine: (newest-cni-007396) Reserving static IP address...
	I0612 21:58:02.063736   86948 main.go:141] libmachine: (newest-cni-007396) DBG | unable to find host DHCP lease matching {name: "newest-cni-007396", mac: "52:54:00:a5:e1:fb", ip: "192.168.50.207"} in network mk-newest-cni-007396
	I0612 21:58:02.146932   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Getting to WaitForSSH function...
	I0612 21:58:02.146965   86948 main.go:141] libmachine: (newest-cni-007396) Reserved static IP address: 192.168.50.207
	I0612 21:58:02.146979   86948 main.go:141] libmachine: (newest-cni-007396) Waiting for SSH to be available...
	I0612 21:58:02.149790   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.150289   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.150323   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.150483   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Using SSH client type: external
	I0612 21:58:02.150512   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa (-rw-------)
	I0612 21:58:02.150548   86948 main.go:141] libmachine: (newest-cni-007396) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:58:02.150565   86948 main.go:141] libmachine: (newest-cni-007396) DBG | About to run SSH command:
	I0612 21:58:02.150580   86948 main.go:141] libmachine: (newest-cni-007396) DBG | exit 0
	I0612 21:58:02.279618   86948 main.go:141] libmachine: (newest-cni-007396) DBG | SSH cmd err, output: <nil>: 
	I0612 21:58:02.279899   86948 main.go:141] libmachine: (newest-cni-007396) KVM machine creation complete!
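WaitForSSH above shells out to the system ssh binary with a throw-away configuration (-F /dev/null, no known-hosts file, key-only auth) and simply runs exit 0 until the command succeeds. A hedged sketch of that reachability probe using os/exec with a subset of the logged flags; the key path and the helper name are assumptions:

    package main

    import (
        "log"
        "os/exec"
    )

    // sshReachable runs `ssh ... exit 0` with a subset of the hardening flags
    // seen in the log and reports whether the guest accepted the connection.
    func sshReachable(ip, keyPath string) bool {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
        if sshReachable("192.168.50.207", "/path/to/machines/newest-cni-007396/id_rsa") {
            log.Println("SSH is available")
        } else {
            log.Println("SSH not reachable yet, keep polling")
        }
    }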
	I0612 21:58:02.280217   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetConfigRaw
	I0612 21:58:02.280700   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:02.280886   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:02.281060   86948 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0612 21:58:02.281077   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetState
	I0612 21:58:02.282541   86948 main.go:141] libmachine: Detecting operating system of created instance...
	I0612 21:58:02.282554   86948 main.go:141] libmachine: Waiting for SSH to be available...
	I0612 21:58:02.282560   86948 main.go:141] libmachine: Getting to WaitForSSH function...
	I0612 21:58:02.282566   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.285113   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.285505   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.285535   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.285681   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:02.285880   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.286029   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.286215   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:02.286406   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:02.286581   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:02.286594   86948 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0612 21:58:02.394673   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:58:02.394702   86948 main.go:141] libmachine: Detecting the provisioner...
	I0612 21:58:02.394714   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.397514   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.397799   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.397821   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.397989   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:02.398190   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.398390   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.398545   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:02.398715   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:02.398921   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:02.398932   86948 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0612 21:58:02.504115   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0612 21:58:02.504176   86948 main.go:141] libmachine: found compatible host: buildroot
	I0612 21:58:02.504183   86948 main.go:141] libmachine: Provisioning with buildroot...
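Provisioner detection is just `cat /etc/os-release` over SSH and a match on the ID field; seeing ID=buildroot is what selects the buildroot provisioner. A small sketch of parsing that payload, assuming the key=value format shown above (the function name is made up for illustration):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease extracts KEY=value pairs from an os-release payload like
    // the one printed above; the ID field is what picks the provisioner.
    func parseOSRelease(data string) map[string]string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(data))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || !strings.Contains(line, "=") {
                continue
            }
            kv := strings.SplitN(line, "=", 2)
            fields[kv[0]] = strings.Trim(kv[1], `"`)
        }
        return fields
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        f := parseOSRelease(out)
        fmt.Println("found compatible host:", f["ID"], f["VERSION_ID"])
    }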
	I0612 21:58:02.504190   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetMachineName
	I0612 21:58:02.504433   86948 buildroot.go:166] provisioning hostname "newest-cni-007396"
	I0612 21:58:02.504459   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetMachineName
	I0612 21:58:02.504702   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.508127   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.508526   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.508555   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.508732   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:02.508920   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.509065   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.509177   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:02.509332   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:02.509586   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:02.509607   86948 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-007396 && echo "newest-cni-007396" | sudo tee /etc/hostname
	I0612 21:58:02.630796   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-007396
	
	I0612 21:58:02.630828   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.633959   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.634507   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.634545   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.634710   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:02.634901   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.635104   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.635310   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:02.635497   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:02.635697   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:02.635723   86948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-007396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-007396/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-007396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:58:02.754971   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:58:02.755003   86948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:58:02.755025   86948 buildroot.go:174] setting up certificates
	I0612 21:58:02.755037   86948 provision.go:84] configureAuth start
	I0612 21:58:02.755049   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetMachineName
	I0612 21:58:02.755367   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetIP
	I0612 21:58:02.757918   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.758342   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.758374   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.758471   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.761085   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.761409   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.761437   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.761582   86948 provision.go:143] copyHostCerts
	I0612 21:58:02.761670   86948 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:58:02.761680   86948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:58:02.761744   86948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:58:02.761842   86948 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:58:02.761850   86948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:58:02.761872   86948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:58:02.761932   86948 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:58:02.761939   86948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:58:02.761959   86948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:58:02.762037   86948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.newest-cni-007396 san=[127.0.0.1 192.168.50.207 localhost minikube newest-cni-007396]
	I0612 21:58:02.983584   86948 provision.go:177] copyRemoteCerts
	I0612 21:58:02.983643   86948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:58:02.983665   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:02.986420   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.986728   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:02.986767   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:02.986935   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:02.987149   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:02.987356   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:02.987507   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:03.069906   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0612 21:58:03.095863   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 21:58:03.124797   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:58:03.149919   86948 provision.go:87] duration metric: took 394.869081ms to configureAuth
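configureAuth refreshes the host-side CA material and then mints a server certificate whose subject alternative names cover every way the machine will be addressed (the san=[...] list above: loopback, the guest IP, and the hostnames). Below is a rough sketch of producing a SAN-bearing certificate with crypto/x509; it is self-signed for brevity, whereas the real flow signs with the CA under .minikube/certs, and the output filenames are assumptions:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    // newServerCert creates a key pair and a certificate whose SANs match the
    // list logged above. Self-signed here; the real code signs with the CA.
    func newServerCert(ips []net.IP, dnsNames []string) (certPEM, keyPEM []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-007396"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
            DNSNames:     dnsNames,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }

    func main() {
        cert, key, err := newServerCert(
            []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.207")},
            []string{"localhost", "minikube", "newest-cni-007396"},
        )
        if err != nil {
            log.Fatal(err)
        }
        os.WriteFile("server.pem", cert, 0644)
        os.WriteFile("server-key.pem", key, 0600)
    }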
	I0612 21:58:03.149945   86948 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:58:03.150170   86948 config.go:182] Loaded profile config "newest-cni-007396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:58:03.150272   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:03.153322   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.153699   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.153737   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.153974   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:03.154243   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.154441   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.154623   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:03.154845   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:03.154995   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:03.155009   86948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:58:03.430020   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
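The `%!s(MISSING)` in the command above is not part of the remote script: it is how Go's fmt package renders a %s verb that has no matching argument, so the log shows the format template rather than the substituted command (the echoed output confirms the intended effect: writing CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarting crio). The same artifact appears later in the `date +%!s(MISSING).%!N(MISSING)` probe. A two-line demonstration of the fmt behaviour:

    package main

    import "fmt"

    func main() {
        // A %s verb without a corresponding argument prints as %!s(MISSING)
        // (go vet flags this, but it compiles and runs), which is exactly the
        // artifact visible in the logged command above.
        fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s\n")
        // Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING)
    }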
	I0612 21:58:03.430053   86948 main.go:141] libmachine: Checking connection to Docker...
	I0612 21:58:03.430064   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetURL
	I0612 21:58:03.431420   86948 main.go:141] libmachine: (newest-cni-007396) DBG | Using libvirt version 6000000
	I0612 21:58:03.433660   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.434051   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.434083   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.434223   86948 main.go:141] libmachine: Docker is up and running!
	I0612 21:58:03.434238   86948 main.go:141] libmachine: Reticulating splines...
	I0612 21:58:03.434247   86948 client.go:171] duration metric: took 23.793089795s to LocalClient.Create
	I0612 21:58:03.434273   86948 start.go:167] duration metric: took 23.793159772s to libmachine.API.Create "newest-cni-007396"
	I0612 21:58:03.434286   86948 start.go:293] postStartSetup for "newest-cni-007396" (driver="kvm2")
	I0612 21:58:03.434298   86948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:58:03.434317   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:03.434571   86948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:58:03.434594   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:03.436668   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.436966   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.436998   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.437209   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:03.437409   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.437582   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:03.437706   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:03.526365   86948 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:58:03.530621   86948 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:58:03.530646   86948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:58:03.530713   86948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:58:03.531006   86948 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:58:03.531139   86948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:58:03.541890   86948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:58:03.567793   86948 start.go:296] duration metric: took 133.495039ms for postStartSetup
	I0612 21:58:03.567838   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetConfigRaw
	I0612 21:58:03.568519   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetIP
	I0612 21:58:03.571244   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.571648   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.571675   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.571966   86948 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/newest-cni-007396/config.json ...
	I0612 21:58:03.572180   86948 start.go:128] duration metric: took 23.948557924s to createHost
	I0612 21:58:03.572207   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:03.574448   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.574799   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.574824   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.575004   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:03.575225   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.575414   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.575577   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:03.575750   86948 main.go:141] libmachine: Using SSH client type: native
	I0612 21:58:03.575947   86948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I0612 21:58:03.575960   86948 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:58:03.680255   86948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718229483.653291457
	
	I0612 21:58:03.680279   86948 fix.go:216] guest clock: 1718229483.653291457
	I0612 21:58:03.680288   86948 fix.go:229] Guest: 2024-06-12 21:58:03.653291457 +0000 UTC Remote: 2024-06-12 21:58:03.572192588 +0000 UTC m=+24.058769808 (delta=81.098869ms)
	I0612 21:58:03.680348   86948 fix.go:200] guest clock delta is within tolerance: 81.098869ms
	I0612 21:58:03.680359   86948 start.go:83] releasing machines lock for "newest-cni-007396", held for 24.056803081s
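The fix.go lines above read the guest's wall clock over SSH (`date +%s.%N`, again logged with the missing-argument artifact) and compare it with the host's; the roughly 81ms delta is accepted as being within tolerance. A small sketch of that comparison; the 2s tolerance used here is an illustrative assumption:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far it
    // is from the given local timestamp.
    func clockDelta(guestStamp string, local time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestStamp, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(local), nil
    }

    func main() {
        // Values taken from the log above (nanosecond precision is approximate
        // once the string goes through a float64).
        delta, err := clockDelta("1718229483.653291457", time.Unix(0, 1718229483572192588))
        if err != nil {
            panic(err)
        }
        const tolerance = 2 * time.Second
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        } else {
            fmt.Printf("guest clock is skewed by %v\n", delta)
        }
    }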
	I0612 21:58:03.680388   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:03.680651   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetIP
	I0612 21:58:03.683199   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.683495   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.683520   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.683694   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:03.684217   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:03.684420   86948 main.go:141] libmachine: (newest-cni-007396) Calling .DriverName
	I0612 21:58:03.684511   86948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:58:03.684561   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:03.684619   86948 ssh_runner.go:195] Run: cat /version.json
	I0612 21:58:03.684642   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHHostname
	I0612 21:58:03.687373   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.687651   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.687709   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.687765   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.687870   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:03.688095   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.688146   86948 main.go:141] libmachine: (newest-cni-007396) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:e1:fb", ip: ""} in network mk-newest-cni-007396: {Iface:virbr2 ExpiryTime:2024-06-12 22:57:54 +0000 UTC Type:0 Mac:52:54:00:a5:e1:fb Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:newest-cni-007396 Clientid:01:52:54:00:a5:e1:fb}
	I0612 21:58:03.688172   86948 main.go:141] libmachine: (newest-cni-007396) DBG | domain newest-cni-007396 has defined IP address 192.168.50.207 and MAC address 52:54:00:a5:e1:fb in network mk-newest-cni-007396
	I0612 21:58:03.688279   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:03.688389   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHPort
	I0612 21:58:03.688453   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:03.688521   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHKeyPath
	I0612 21:58:03.688685   86948 main.go:141] libmachine: (newest-cni-007396) Calling .GetSSHUsername
	I0612 21:58:03.688838   86948 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/newest-cni-007396/id_rsa Username:docker}
	I0612 21:58:03.764995   86948 ssh_runner.go:195] Run: systemctl --version
	I0612 21:58:03.787664   86948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:58:03.948904   86948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:58:03.955287   86948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:58:03.955368   86948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:58:03.973537   86948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:58:03.973563   86948 start.go:494] detecting cgroup driver to use...
	I0612 21:58:03.973630   86948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:58:03.991002   86948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:58:04.004854   86948 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:58:04.004913   86948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:58:04.019058   86948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:58:04.032658   86948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:58:04.158544   86948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:58:04.315596   86948 docker.go:233] disabling docker service ...
	I0612 21:58:04.315682   86948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:58:04.333215   86948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:58:04.350500   86948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:58:04.497343   86948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
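Before CRI-O is configured as the runtime, cri-dockerd and Docker are stopped, disabled, and masked so nothing re-activates them on the guest. A hedged local sketch of that sequence; minikube issues the same commands through its SSH runner, and tolerating individual failures (for example a unit that does not exist) is an assumption made explicit here:

    package main

    import (
        "log"
        "os/exec"
    )

    // disableDockerUnits mirrors the sequence in the log: stop the sockets and
    // services, then disable and mask them.
    func disableDockerUnits() {
        cmds := [][]string{
            {"sudo", "systemctl", "stop", "-f", "cri-docker.socket"},
            {"sudo", "systemctl", "stop", "-f", "cri-docker.service"},
            {"sudo", "systemctl", "disable", "cri-docker.socket"},
            {"sudo", "systemctl", "mask", "cri-docker.service"},
            {"sudo", "systemctl", "stop", "-f", "docker.socket"},
            {"sudo", "systemctl", "stop", "-f", "docker.service"},
            {"sudo", "systemctl", "disable", "docker.socket"},
            {"sudo", "systemctl", "mask", "docker.service"},
        }
        for _, c := range cmds {
            if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
                // keep going so a missing or already-inactive unit does not
                // abort the rest of the sequence
                log.Printf("%v: %v (%s)", c, err, out)
            }
        }
    }

    func main() {
        disableDockerUnits()
    }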
	
	
	==> CRI-O <==
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.576937272Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229485576900963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d984b98e-8b5e-406a-9c55-b9e8f1eeb4fe name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.577834276Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a290c5a4-211e-4c02-a6cb-0f459325a03b name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.577928036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a290c5a4-211e-4c02-a6cb-0f459325a03b name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.578203249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6d77b024431184651a9e21a458220d2924f4a46103d49a982b82d76487f2ff9,PodSandboxId:f1c342424d4fa0d74624f4863e382e82f1be44d9213f285877a9484b51438e18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228661367728599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90368fec-12d9-4baf-aef6-233691b5e99d,},Annotations:map[string]string{io.kubernetes.container.hash: ab3c8dcd,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f8a87fdb0e00f5579536445325d8b2dc0cfa37844f8747f40d5357afb8cf87,PodSandboxId:31aefcd0f0a8003d4a35aec62f9a43f1dee6afbdf0995d48b1e4a19a3b1f7924,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228661055629396,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsvvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6c768b-75e2-4c11-99db-1103367ccc20,},Annotations:map[string]string{io.kubernetes.container.hash: d5ad641f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bffb5002753da23404659493ed47336a599fda15e4fc48a8f22aa2146c588e85,PodSandboxId:61080e3d2ddf2e6660c3547cfa897a3b97dc067ee9f372872611c4828b04403f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228660869483777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v75tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b
48ba7d-8f66-4c31-ac14-3a38e18fa249,},Annotations:map[string]string{io.kubernetes.container.hash: 728d435d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d1ee15d2a35f909b263e8c592ac6c6bd5a01dc4c45e530fd0a24db98e8eb88,PodSandboxId:580e786b47f15d101e18d13a9631f43760251be9d0147f8cbfbee81d637ed2d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718228660368472103,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdf1156c-ba02-4551-aefa-66379b05e066,},Annotations:map[string]string{io.kubernetes.container.hash: fb7cf440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5253531d0c365ba7a37fe180563ed113f68906bd040776c09bb7aef9562ac80e,PodSandboxId:c4d2a14f93a7daa4c51ebced3fa88df7372518d23e18408aac8ce801f85a0b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228640704921735,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fd711c83c9b417403b6a9e31847398,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d4fb81f507b1127559b8713eadff985fc51dfd8b7106a3a0c8ea9f28b027fc,PodSandboxId:c00063a5386b0f11c81d8e99f5364d71d24daa6724b1361f5d69d6edbc7610e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228640714242866,Label
s:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68e866e8b2a2984f62db205dab7b3e4f,},Annotations:map[string]string{io.kubernetes.container.hash: f64610c1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8bcdefdd9089db199dd6927625d23ce5553cc46a0949830ebce16e23e24bf,PodSandboxId:f04141bf9a6264c590d76ba434b0444355cf3b456d397b8081ad0ddb52d0ceca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228640701638256,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59e66dff9d6757e593577e4be5a7bcf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b712747a34d00d68d998ce34e9f775f0ddf3fc9d427853334fc3d043d9bd617d,PodSandboxId:70e422e682f35fdd17cdbdad8183e193a35ac883ebbbb1b1e21fa43e0f4505f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228640607697101,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656e6a3b53b4be584918cbaf50560652,},Annotations:map[string]string{io.kubernetes.container.hash: 995ac9bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a290c5a4-211e-4c02-a6cb-0f459325a03b name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.628146635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=281f848e-38de-4fa0-a0ff-4a7d6737854b name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.628246930Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=281f848e-38de-4fa0-a0ff-4a7d6737854b name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.629247716Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15599801-8c7b-41c6-b5ca-05731680d318 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.629851073Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229485629827218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15599801-8c7b-41c6-b5ca-05731680d318 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.630730912Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a15a6af6-864f-4495-acb9-dfdd317f90f7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.630847434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a15a6af6-864f-4495-acb9-dfdd317f90f7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.631083414Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6d77b024431184651a9e21a458220d2924f4a46103d49a982b82d76487f2ff9,PodSandboxId:f1c342424d4fa0d74624f4863e382e82f1be44d9213f285877a9484b51438e18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228661367728599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90368fec-12d9-4baf-aef6-233691b5e99d,},Annotations:map[string]string{io.kubernetes.container.hash: ab3c8dcd,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f8a87fdb0e00f5579536445325d8b2dc0cfa37844f8747f40d5357afb8cf87,PodSandboxId:31aefcd0f0a8003d4a35aec62f9a43f1dee6afbdf0995d48b1e4a19a3b1f7924,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228661055629396,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsvvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6c768b-75e2-4c11-99db-1103367ccc20,},Annotations:map[string]string{io.kubernetes.container.hash: d5ad641f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bffb5002753da23404659493ed47336a599fda15e4fc48a8f22aa2146c588e85,PodSandboxId:61080e3d2ddf2e6660c3547cfa897a3b97dc067ee9f372872611c4828b04403f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228660869483777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v75tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b
48ba7d-8f66-4c31-ac14-3a38e18fa249,},Annotations:map[string]string{io.kubernetes.container.hash: 728d435d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d1ee15d2a35f909b263e8c592ac6c6bd5a01dc4c45e530fd0a24db98e8eb88,PodSandboxId:580e786b47f15d101e18d13a9631f43760251be9d0147f8cbfbee81d637ed2d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718228660368472103,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdf1156c-ba02-4551-aefa-66379b05e066,},Annotations:map[string]string{io.kubernetes.container.hash: fb7cf440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5253531d0c365ba7a37fe180563ed113f68906bd040776c09bb7aef9562ac80e,PodSandboxId:c4d2a14f93a7daa4c51ebced3fa88df7372518d23e18408aac8ce801f85a0b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228640704921735,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fd711c83c9b417403b6a9e31847398,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d4fb81f507b1127559b8713eadff985fc51dfd8b7106a3a0c8ea9f28b027fc,PodSandboxId:c00063a5386b0f11c81d8e99f5364d71d24daa6724b1361f5d69d6edbc7610e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228640714242866,Label
s:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68e866e8b2a2984f62db205dab7b3e4f,},Annotations:map[string]string{io.kubernetes.container.hash: f64610c1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8bcdefdd9089db199dd6927625d23ce5553cc46a0949830ebce16e23e24bf,PodSandboxId:f04141bf9a6264c590d76ba434b0444355cf3b456d397b8081ad0ddb52d0ceca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228640701638256,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59e66dff9d6757e593577e4be5a7bcf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b712747a34d00d68d998ce34e9f775f0ddf3fc9d427853334fc3d043d9bd617d,PodSandboxId:70e422e682f35fdd17cdbdad8183e193a35ac883ebbbb1b1e21fa43e0f4505f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228640607697101,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656e6a3b53b4be584918cbaf50560652,},Annotations:map[string]string{io.kubernetes.container.hash: 995ac9bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a15a6af6-864f-4495-acb9-dfdd317f90f7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.680671959Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9159b9df-2d89-4730-867f-72f5d82ed89d name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.680793705Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9159b9df-2d89-4730-867f-72f5d82ed89d name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.682174632Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5666333-4bf2-4658-aff7-9bd0345583a7 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.683024878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229485682997820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5666333-4bf2-4658-aff7-9bd0345583a7 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.683645032Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=147330ca-e425-4c8b-89bd-d5575359a138 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.683709055Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=147330ca-e425-4c8b-89bd-d5575359a138 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.683944629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6d77b024431184651a9e21a458220d2924f4a46103d49a982b82d76487f2ff9,PodSandboxId:f1c342424d4fa0d74624f4863e382e82f1be44d9213f285877a9484b51438e18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228661367728599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90368fec-12d9-4baf-aef6-233691b5e99d,},Annotations:map[string]string{io.kubernetes.container.hash: ab3c8dcd,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f8a87fdb0e00f5579536445325d8b2dc0cfa37844f8747f40d5357afb8cf87,PodSandboxId:31aefcd0f0a8003d4a35aec62f9a43f1dee6afbdf0995d48b1e4a19a3b1f7924,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228661055629396,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsvvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6c768b-75e2-4c11-99db-1103367ccc20,},Annotations:map[string]string{io.kubernetes.container.hash: d5ad641f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bffb5002753da23404659493ed47336a599fda15e4fc48a8f22aa2146c588e85,PodSandboxId:61080e3d2ddf2e6660c3547cfa897a3b97dc067ee9f372872611c4828b04403f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228660869483777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v75tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b
48ba7d-8f66-4c31-ac14-3a38e18fa249,},Annotations:map[string]string{io.kubernetes.container.hash: 728d435d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d1ee15d2a35f909b263e8c592ac6c6bd5a01dc4c45e530fd0a24db98e8eb88,PodSandboxId:580e786b47f15d101e18d13a9631f43760251be9d0147f8cbfbee81d637ed2d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718228660368472103,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdf1156c-ba02-4551-aefa-66379b05e066,},Annotations:map[string]string{io.kubernetes.container.hash: fb7cf440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5253531d0c365ba7a37fe180563ed113f68906bd040776c09bb7aef9562ac80e,PodSandboxId:c4d2a14f93a7daa4c51ebced3fa88df7372518d23e18408aac8ce801f85a0b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228640704921735,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fd711c83c9b417403b6a9e31847398,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d4fb81f507b1127559b8713eadff985fc51dfd8b7106a3a0c8ea9f28b027fc,PodSandboxId:c00063a5386b0f11c81d8e99f5364d71d24daa6724b1361f5d69d6edbc7610e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228640714242866,Label
s:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68e866e8b2a2984f62db205dab7b3e4f,},Annotations:map[string]string{io.kubernetes.container.hash: f64610c1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8bcdefdd9089db199dd6927625d23ce5553cc46a0949830ebce16e23e24bf,PodSandboxId:f04141bf9a6264c590d76ba434b0444355cf3b456d397b8081ad0ddb52d0ceca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228640701638256,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59e66dff9d6757e593577e4be5a7bcf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b712747a34d00d68d998ce34e9f775f0ddf3fc9d427853334fc3d043d9bd617d,PodSandboxId:70e422e682f35fdd17cdbdad8183e193a35ac883ebbbb1b1e21fa43e0f4505f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228640607697101,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656e6a3b53b4be584918cbaf50560652,},Annotations:map[string]string{io.kubernetes.container.hash: 995ac9bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=147330ca-e425-4c8b-89bd-d5575359a138 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.731019479Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44cbfe37-46b4-427a-882b-2b19f31b360c name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.731096177Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44cbfe37-46b4-427a-882b-2b19f31b360c name=/runtime.v1.RuntimeService/Version
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.732847598Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc33bf7f-0719-41f4-8fe7-a635912f1c0b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.733319219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229485733287581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc33bf7f-0719-41f4-8fe7-a635912f1c0b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.733855731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5df1c5c9-969e-4ea5-aa35-9c8c536a7a7b name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.733928870Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5df1c5c9-969e-4ea5-aa35-9c8c536a7a7b name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:58:05 no-preload-087875 crio[720]: time="2024-06-12 21:58:05.734158893Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6d77b024431184651a9e21a458220d2924f4a46103d49a982b82d76487f2ff9,PodSandboxId:f1c342424d4fa0d74624f4863e382e82f1be44d9213f285877a9484b51438e18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718228661367728599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90368fec-12d9-4baf-aef6-233691b5e99d,},Annotations:map[string]string{io.kubernetes.container.hash: ab3c8dcd,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54f8a87fdb0e00f5579536445325d8b2dc0cfa37844f8747f40d5357afb8cf87,PodSandboxId:31aefcd0f0a8003d4a35aec62f9a43f1dee6afbdf0995d48b1e4a19a3b1f7924,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228661055629396,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsvvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6c768b-75e2-4c11-99db-1103367ccc20,},Annotations:map[string]string{io.kubernetes.container.hash: d5ad641f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bffb5002753da23404659493ed47336a599fda15e4fc48a8f22aa2146c588e85,PodSandboxId:61080e3d2ddf2e6660c3547cfa897a3b97dc067ee9f372872611c4828b04403f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718228660869483777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v75tt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b
48ba7d-8f66-4c31-ac14-3a38e18fa249,},Annotations:map[string]string{io.kubernetes.container.hash: 728d435d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d1ee15d2a35f909b263e8c592ac6c6bd5a01dc4c45e530fd0a24db98e8eb88,PodSandboxId:580e786b47f15d101e18d13a9631f43760251be9d0147f8cbfbee81d637ed2d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718228660368472103,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnhzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdf1156c-ba02-4551-aefa-66379b05e066,},Annotations:map[string]string{io.kubernetes.container.hash: fb7cf440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5253531d0c365ba7a37fe180563ed113f68906bd040776c09bb7aef9562ac80e,PodSandboxId:c4d2a14f93a7daa4c51ebced3fa88df7372518d23e18408aac8ce801f85a0b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718228640704921735,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fd711c83c9b417403b6a9e31847398,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7d4fb81f507b1127559b8713eadff985fc51dfd8b7106a3a0c8ea9f28b027fc,PodSandboxId:c00063a5386b0f11c81d8e99f5364d71d24daa6724b1361f5d69d6edbc7610e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718228640714242866,Label
s:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68e866e8b2a2984f62db205dab7b3e4f,},Annotations:map[string]string{io.kubernetes.container.hash: f64610c1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b8bcdefdd9089db199dd6927625d23ce5553cc46a0949830ebce16e23e24bf,PodSandboxId:f04141bf9a6264c590d76ba434b0444355cf3b456d397b8081ad0ddb52d0ceca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718228640701638256,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59e66dff9d6757e593577e4be5a7bcf,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b712747a34d00d68d998ce34e9f775f0ddf3fc9d427853334fc3d043d9bd617d,PodSandboxId:70e422e682f35fdd17cdbdad8183e193a35ac883ebbbb1b1e21fa43e0f4505f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718228640607697101,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-087875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656e6a3b53b4be584918cbaf50560652,},Annotations:map[string]string{io.kubernetes.container.hash: 995ac9bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5df1c5c9-969e-4ea5-aa35-9c8c536a7a7b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b6d77b0244311       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   f1c342424d4fa       storage-provisioner
	54f8a87fdb0e0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   31aefcd0f0a80       coredns-7db6d8ff4d-hsvvf
	bffb5002753da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   61080e3d2ddf2       coredns-7db6d8ff4d-v75tt
	50d1ee15d2a35       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   13 minutes ago      Running             kube-proxy                0                   580e786b47f15       kube-proxy-lnhzt
	e7d4fb81f507b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   14 minutes ago      Running             etcd                      2                   c00063a5386b0       etcd-no-preload-087875
	5253531d0c365       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   14 minutes ago      Running             kube-controller-manager   2                   c4d2a14f93a7d       kube-controller-manager-no-preload-087875
	d2b8bcdefdd90       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   14 minutes ago      Running             kube-scheduler            2                   f04141bf9a626       kube-scheduler-no-preload-087875
	b712747a34d00       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   14 minutes ago      Running             kube-apiserver            2                   70e422e682f35       kube-apiserver-no-preload-087875
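	(Note: a comparable container listing can usually be reproduced on the node itself; the profile name below is inferred from the node name in the logs, and the exact invocation is an assumption, not part of the captured output.)
	  $ minikube -p no-preload-087875 ssh -- sudo crictl ps -a   # list all CRI-O containers, including exited ones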
	
	
	==> coredns [54f8a87fdb0e00f5579536445325d8b2dc0cfa37844f8747f40d5357afb8cf87] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [bffb5002753da23404659493ed47336a599fda15e4fc48a8f22aa2146c588e85] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-087875
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-087875
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79
	                    minikube.k8s.io/name=no-preload-087875
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T21_44_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:44:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-087875
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:58:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:54:40 +0000   Wed, 12 Jun 2024 21:44:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:54:40 +0000   Wed, 12 Jun 2024 21:44:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:54:40 +0000   Wed, 12 Jun 2024 21:44:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:54:40 +0000   Wed, 12 Jun 2024 21:44:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.63
	  Hostname:    no-preload-087875
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 532c93e5ec184a2db3681bc0b10a099e
	  System UUID:                532c93e5-ec18-4a2d-b368-1bc0b10a099e
	  Boot ID:                    0a715db4-7372-4169-a63b-2b81aa42ebc2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-hsvvf                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-v75tt                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-087875                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-087875             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-087875    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-lnhzt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-087875             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-569cc877fc-mdmgw              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-087875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-087875 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-087875 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-087875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-087875 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-087875 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-087875 event: Registered Node no-preload-087875 in Controller
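	(Note: the node description above has the shape of "kubectl describe node" output; a rough sketch for re-querying it, assuming the kubectl context name matches the profile name:)
	  $ kubectl --context no-preload-087875 describe node no-preload-087875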
	
	
	==> dmesg <==
	[  +0.060200] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045199] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.966412] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.482080] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.623128] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.603726] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.062277] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063219] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.211055] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.139166] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.289199] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[Jun12 21:39] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	[  +0.057900] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.934017] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +4.603380] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.461433] kauditd_printk_skb: 79 callbacks suppressed
	[Jun12 21:43] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.919758] systemd-fstab-generator[4009]: Ignoring "noauto" option for root device
	[Jun12 21:44] kauditd_printk_skb: 57 callbacks suppressed
	[  +1.968509] systemd-fstab-generator[4337]: Ignoring "noauto" option for root device
	[ +13.397948] systemd-fstab-generator[4528]: Ignoring "noauto" option for root device
	[  +0.095180] kauditd_printk_skb: 14 callbacks suppressed
	[Jun12 21:45] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [e7d4fb81f507b1127559b8713eadff985fc51dfd8b7106a3a0c8ea9f28b027fc] <==
	{"level":"info","ts":"2024-06-12T21:44:01.031455Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a33ce7b54d42dc99","initial-advertise-peer-urls":["https://192.168.72.63:2380"],"listen-peer-urls":["https://192.168.72.63:2380"],"advertise-client-urls":["https://192.168.72.63:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.63:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-12T21:44:01.031609Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-12T21:44:01.03196Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.63:2380"}
	{"level":"info","ts":"2024-06-12T21:44:01.032005Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.63:2380"}
	{"level":"info","ts":"2024-06-12T21:44:01.096139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a33ce7b54d42dc99 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-12T21:44:01.096248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a33ce7b54d42dc99 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-12T21:44:01.096313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a33ce7b54d42dc99 received MsgPreVoteResp from a33ce7b54d42dc99 at term 1"}
	{"level":"info","ts":"2024-06-12T21:44:01.096425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a33ce7b54d42dc99 became candidate at term 2"}
	{"level":"info","ts":"2024-06-12T21:44:01.096457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a33ce7b54d42dc99 received MsgVoteResp from a33ce7b54d42dc99 at term 2"}
	{"level":"info","ts":"2024-06-12T21:44:01.096488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a33ce7b54d42dc99 became leader at term 2"}
	{"level":"info","ts":"2024-06-12T21:44:01.096568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a33ce7b54d42dc99 elected leader a33ce7b54d42dc99 at term 2"}
	{"level":"info","ts":"2024-06-12T21:44:01.100857Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:44:01.101254Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a33ce7b54d42dc99","local-member-attributes":"{Name:no-preload-087875 ClientURLs:[https://192.168.72.63:2379]}","request-path":"/0/members/a33ce7b54d42dc99/attributes","cluster-id":"cf3413fd070cd1a3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-12T21:44:01.101425Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:44:01.103705Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cf3413fd070cd1a3","local-member-id":"a33ce7b54d42dc99","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:44:01.104207Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:44:01.104255Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:44:01.105832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-12T21:44:01.10373Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:44:01.111696Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-12T21:44:01.115595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-12T21:44:01.11711Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.63:2379"}
	{"level":"info","ts":"2024-06-12T21:54:01.670892Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":717}
	{"level":"info","ts":"2024-06-12T21:54:01.681239Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":717,"took":"9.897851ms","hash":1520246268,"current-db-size-bytes":2293760,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2293760,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-06-12T21:54:01.681314Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1520246268,"revision":717,"compact-revision":-1}
	
	
	==> kernel <==
	 21:58:06 up 19 min,  0 users,  load average: 0.01, 0.13, 0.15
	Linux no-preload-087875 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b712747a34d00d68d998ce34e9f775f0ddf3fc9d427853334fc3d043d9bd617d] <==
	I0612 21:52:04.465625       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:54:03.467243       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:54:03.467668       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0612 21:54:04.468764       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:54:04.468831       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:54:04.468849       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:54:04.468762       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:54:04.468937       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:54:04.470225       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:55:04.469936       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:55:04.470095       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:55:04.470109       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:55:04.471107       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:55:04.471259       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:55:04.471311       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:57:04.471122       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:57:04.471477       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0612 21:57:04.471566       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0612 21:57:04.471602       1 handler_proxy.go:93] no RequestInfo found in the context
	E0612 21:57:04.471712       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0612 21:57:04.473270       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
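	(Note: the repeated 503 errors for v1beta1.metrics.k8s.io indicate the aggregated API backend is not serving, which is consistent with the failing MetricsServer tests. A hedged diagnostic sketch follows; the context name and the k8s-app=metrics-server label are assumptions based on the pod names above.)
	  $ kubectl --context no-preload-087875 get apiservice v1beta1.metrics.k8s.io -o wide      # check the Available condition
	  $ kubectl --context no-preload-087875 -n kube-system get pods -l k8s-app=metrics-server  # check the backing pod
	  $ kubectl --context no-preload-087875 -n kube-system logs deploy/metrics-server          # inspect metrics-server logs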
	
	
	==> kube-controller-manager [5253531d0c365ba7a37fe180563ed113f68906bd040776c09bb7aef9562ac80e] <==
	I0612 21:52:19.917867       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:52:49.455865       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:52:49.926124       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:53:19.462013       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:53:19.933418       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:53:49.468790       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:53:49.941721       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:54:19.475868       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:54:19.949275       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:54:49.480214       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:54:49.958287       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:55:19.486009       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:55:19.968962       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0612 21:55:33.294257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="226.06µs"
	I0612 21:55:44.296451       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="544.817µs"
	E0612 21:55:49.492083       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:55:49.979759       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:56:19.498004       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:56:19.987474       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:56:49.502723       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:56:49.997260       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:57:19.508476       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:57:20.007126       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0612 21:57:49.514720       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0612 21:57:50.016310       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [50d1ee15d2a35f909b263e8c592ac6c6bd5a01dc4c45e530fd0a24db98e8eb88] <==
	I0612 21:44:20.697702       1 server_linux.go:69] "Using iptables proxy"
	I0612 21:44:20.716607       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.63"]
	I0612 21:44:21.298595       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 21:44:21.298696       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 21:44:21.298784       1 server_linux.go:165] "Using iptables Proxier"
	I0612 21:44:21.338956       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 21:44:21.339309       1 server.go:872] "Version info" version="v1.30.1"
	I0612 21:44:21.339333       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:44:21.340794       1 config.go:192] "Starting service config controller"
	I0612 21:44:21.340825       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 21:44:21.340850       1 config.go:101] "Starting endpoint slice config controller"
	I0612 21:44:21.340853       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 21:44:21.344037       1 config.go:319] "Starting node config controller"
	I0612 21:44:21.344068       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 21:44:21.441013       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 21:44:21.441074       1 shared_informer.go:320] Caches are synced for service config
	I0612 21:44:21.444495       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d2b8bcdefdd9089db199dd6927625d23ce5553cc46a0949830ebce16e23e24bf] <==
	E0612 21:44:03.489350       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0612 21:44:03.489353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0612 21:44:04.304264       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0612 21:44:04.304387       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0612 21:44:04.326434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0612 21:44:04.326627       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0612 21:44:04.347248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0612 21:44:04.349972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0612 21:44:04.373781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0612 21:44:04.373882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0612 21:44:04.397288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0612 21:44:04.397448       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0612 21:44:04.468677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0612 21:44:04.468803       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0612 21:44:04.492945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0612 21:44:04.492974       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0612 21:44:04.585489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0612 21:44:04.585594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0612 21:44:04.680183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0612 21:44:04.680340       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0612 21:44:04.692101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0612 21:44:04.692228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0612 21:44:04.794497       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 21:44:04.794681       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 21:44:06.973814       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 21:55:59 no-preload-087875 kubelet[4344]: E0612 21:55:59.272192    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:56:06 no-preload-087875 kubelet[4344]: E0612 21:56:06.295009    4344 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:56:06 no-preload-087875 kubelet[4344]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:56:06 no-preload-087875 kubelet[4344]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:56:06 no-preload-087875 kubelet[4344]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:56:06 no-preload-087875 kubelet[4344]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:56:14 no-preload-087875 kubelet[4344]: E0612 21:56:14.272959    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:56:29 no-preload-087875 kubelet[4344]: E0612 21:56:29.273631    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:56:42 no-preload-087875 kubelet[4344]: E0612 21:56:42.273281    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:56:54 no-preload-087875 kubelet[4344]: E0612 21:56:54.272166    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:57:06 no-preload-087875 kubelet[4344]: E0612 21:57:06.274611    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:57:06 no-preload-087875 kubelet[4344]: E0612 21:57:06.293930    4344 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:57:06 no-preload-087875 kubelet[4344]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:57:06 no-preload-087875 kubelet[4344]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:57:06 no-preload-087875 kubelet[4344]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:57:06 no-preload-087875 kubelet[4344]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:57:19 no-preload-087875 kubelet[4344]: E0612 21:57:19.277060    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:57:33 no-preload-087875 kubelet[4344]: E0612 21:57:33.273904    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:57:46 no-preload-087875 kubelet[4344]: E0612 21:57:46.279495    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:57:58 no-preload-087875 kubelet[4344]: E0612 21:57:58.274298    4344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mdmgw" podUID="17725ee6-1d17-4a1b-9c65-f596b9b7725f"
	Jun 12 21:58:06 no-preload-087875 kubelet[4344]: E0612 21:58:06.314573    4344 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:58:06 no-preload-087875 kubelet[4344]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:58:06 no-preload-087875 kubelet[4344]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:58:06 no-preload-087875 kubelet[4344]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:58:06 no-preload-087875 kubelet[4344]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [b6d77b024431184651a9e21a458220d2924f4a46103d49a982b82d76487f2ff9] <==
	I0612 21:44:21.665248       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0612 21:44:21.706069       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0612 21:44:21.706156       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0612 21:44:21.745263       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0612 21:44:21.745444       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-087875_6913d51f-6e50-41ee-ab1b-5c13c878778d!
	I0612 21:44:21.753575       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8d24827-03ed-4e6c-852e-2afbc0f4308a", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-087875_6913d51f-6e50-41ee-ab1b-5c13c878778d became leader
	I0612 21:44:21.845978       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-087875_6913d51f-6e50-41ee-ab1b-5c13c878778d!
	

                                                
                                                
-- /stdout --
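The kubelet log above shows metrics-server repeatedly stuck in ImagePullBackOff because it is pointed at the unreachable image fake.domain/registry.k8s.io/echoserver:1.4. A minimal sketch of how that configured image could be confirmed from the same context, assuming the addon runs as a Deployment named metrics-server in kube-system (a hypothetical check, not part of this test run):

  # Hypothetical: read the image configured on the metrics-server Deployment
  kubectl --context no-preload-087875 -n kube-system get deploy metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'
  # List the pods that never reached Running, the same way the harness does below
  kubectl --context no-preload-087875 get po -A --field-selector=status.phase!=Running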
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-087875 -n no-preload-087875
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-087875 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-mdmgw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-087875 describe pod metrics-server-569cc877fc-mdmgw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-087875 describe pod metrics-server-569cc877fc-mdmgw: exit status 1 (72.009069ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-mdmgw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-087875 describe pod metrics-server-569cc877fc-mdmgw: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (280.19s)
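The recurring "Could not set up iptables canary" entries in the no-preload kubelet log above come from ip6tables being unable to initialize the nat table ("do you need to insmod?"). A sketch of how that could be checked on the node, assuming shell access through minikube ssh (illustrative only, not executed by the test):

  # Hypothetical: check whether the ip6table_nat module is loaded on the node
  out/minikube-linux-amd64 -p no-preload-087875 ssh -- lsmod | grep ip6table_nat
  # Loading it requires the module to exist for the node's kernel
  out/minikube-linux-amd64 -p no-preload-087875 ssh -- sudo modprobe ip6table_nat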

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (116.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
E0612 21:56:26.516975   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/calico-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
E0612 21:56:48.612990   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
E0612 21:57:06.289777   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
E0612 21:57:29.497835   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.81:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.81:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-983302 -n old-k8s-version-983302
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-983302 -n old-k8s-version-983302: exit status 2 (238.370907ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-983302" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-983302 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-983302 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.262µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-983302 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
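The two failed assertions above amount to checks that can be re-run by hand once the apiserver answers again; a minimal sketch, using only the context, namespace, label, and deployment name already shown in this log:

	# pod the test waited 9m0s for
	kubectl --context old-k8s-version-983302 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# image the test expected to contain registry.k8s.io/echoserver:1.4
	kubectl --context old-k8s-version-983302 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'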
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302: exit status 2 (224.774522ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-983302 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-983302 logs -n 25: (1.585032804s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| delete  | -p bridge-701638                                       | bridge-701638                | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-701638                           | enable-default-cni-701638    | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-576552 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:28 UTC |
	|         | disable-driver-mounts-576552                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:28 UTC | 12 Jun 24 21:30 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-087875             | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-376087  | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-591460            | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC | 12 Jun 24 21:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:30 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-983302        | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-087875                  | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-376087       | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-087875                                   | no-preload-087875            | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-376087 | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC | 12 Jun 24 21:42 UTC |
	|         | default-k8s-diff-port-376087                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-591460                 | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-591460                                  | embed-certs-591460           | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-983302             | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC | 12 Jun 24 21:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-983302                              | old-k8s-version-983302       | jenkins | v1.33.1 | 12 Jun 24 21:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 21:33:52
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 21:33:52.855557   80762 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:33:52.855829   80762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:33:52.855839   80762 out.go:304] Setting ErrFile to fd 2...
	I0612 21:33:52.855845   80762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:33:52.856037   80762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:33:52.856582   80762 out.go:298] Setting JSON to false
	I0612 21:33:52.857472   80762 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8178,"bootTime":1718219855,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:33:52.857527   80762 start.go:139] virtualization: kvm guest
	I0612 21:33:52.859369   80762 out.go:177] * [old-k8s-version-983302] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:33:52.860886   80762 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:33:52.860907   80762 notify.go:220] Checking for updates...
	I0612 21:33:52.862185   80762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:33:52.863642   80762 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:33:52.865031   80762 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:33:52.866306   80762 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:33:52.867535   80762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:33:52.869148   80762 config.go:182] Loaded profile config "old-k8s-version-983302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:33:52.869530   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:33:52.869597   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:33:52.884278   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41163
	I0612 21:33:52.884743   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:33:52.885211   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:33:52.885234   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:33:52.885575   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:33:52.885768   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:33:52.887577   80762 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0612 21:33:52.888972   80762 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:33:52.889265   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:33:52.889296   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:33:52.903649   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44493
	I0612 21:33:52.904087   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:33:52.904500   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:33:52.904518   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:33:52.904831   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:33:52.904988   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:33:52.939030   80762 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 21:33:52.940484   80762 start.go:297] selected driver: kvm2
	I0612 21:33:52.940497   80762 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:33:52.940622   80762 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:33:52.941314   80762 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:33:52.941389   80762 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 21:33:52.956273   80762 install.go:137] /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0612 21:33:52.956646   80762 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:33:52.956674   80762 cni.go:84] Creating CNI manager for ""
	I0612 21:33:52.956682   80762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:33:52.956715   80762 start.go:340] cluster config:
	{Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:33:52.956828   80762 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 21:33:52.958634   80762 out.go:177] * Starting "old-k8s-version-983302" primary control-plane node in "old-k8s-version-983302" cluster
	I0612 21:33:52.959924   80762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:33:52.959963   80762 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0612 21:33:52.959970   80762 cache.go:56] Caching tarball of preloaded images
	I0612 21:33:52.960065   80762 preload.go:173] Found /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0612 21:33:52.960079   80762 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0612 21:33:52.960190   80762 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:33:52.960397   80762 start.go:360] acquireMachinesLock for old-k8s-version-983302: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:33:57.423439   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:00.495475   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:06.575478   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:09.647560   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:15.727510   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:18.799491   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:24.879423   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:27.951495   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:34.031457   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:37.103569   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:43.183470   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:46.255491   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:52.335452   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:34:55.407544   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:01.487489   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:04.559546   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:10.639492   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:13.711372   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:19.791460   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:22.863455   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:28.943506   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:32.015443   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:38.095436   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:41.167526   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:47.247485   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:50.319435   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:56.399471   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:35:59.471485   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:05.551493   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:08.623467   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:14.703401   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:17.775479   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:23.855516   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:26.927418   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:33.007439   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:36.079449   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:42.159480   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:45.231482   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:51.311424   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:36:54.383524   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:00.463466   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:03.535465   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:09.615457   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:12.687462   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:18.767463   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:21.839431   80157 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.63:22: connect: no route to host
	I0612 21:37:24.843967   80243 start.go:364] duration metric: took 4m34.377488728s to acquireMachinesLock for "default-k8s-diff-port-376087"
	I0612 21:37:24.844034   80243 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:37:24.844046   80243 fix.go:54] fixHost starting: 
	I0612 21:37:24.844649   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:37:24.844689   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:37:24.859743   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I0612 21:37:24.860227   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:37:24.860659   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:37:24.860680   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:37:24.861055   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:37:24.861352   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:24.861550   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:37:24.863507   80243 fix.go:112] recreateIfNeeded on default-k8s-diff-port-376087: state=Stopped err=<nil>
	I0612 21:37:24.863538   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	W0612 21:37:24.863708   80243 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:37:24.865564   80243 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-376087" ...
	I0612 21:37:24.866899   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Start
	I0612 21:37:24.867064   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring networks are active...
	I0612 21:37:24.867951   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring network default is active
	I0612 21:37:24.868390   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Ensuring network mk-default-k8s-diff-port-376087 is active
	I0612 21:37:24.868746   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Getting domain xml...
	I0612 21:37:24.869408   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Creating domain...
	I0612 21:37:24.841481   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:37:24.841529   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:37:24.841912   80157 buildroot.go:166] provisioning hostname "no-preload-087875"
	I0612 21:37:24.841938   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:37:24.842149   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:37:24.843818   80157 machine.go:97] duration metric: took 4m37.413209096s to provisionDockerMachine
	I0612 21:37:24.843853   80157 fix.go:56] duration metric: took 4m37.434262933s for fixHost
	I0612 21:37:24.843860   80157 start.go:83] releasing machines lock for "no-preload-087875", held for 4m37.434303466s
	W0612 21:37:24.843897   80157 start.go:713] error starting host: provision: host is not running
	W0612 21:37:24.843971   80157 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0612 21:37:24.843980   80157 start.go:728] Will try again in 5 seconds ...
	I0612 21:37:26.077364   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting to get IP...
	I0612 21:37:26.078173   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.078646   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.078686   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.078611   81491 retry.go:31] will retry after 224.429366ms: waiting for machine to come up
	I0612 21:37:26.305227   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.305668   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.305699   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.305627   81491 retry.go:31] will retry after 298.325251ms: waiting for machine to come up
	I0612 21:37:26.605155   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.605587   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.605622   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.605558   81491 retry.go:31] will retry after 327.789765ms: waiting for machine to come up
	I0612 21:37:26.935066   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.935536   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:26.935567   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:26.935477   81491 retry.go:31] will retry after 381.56012ms: waiting for machine to come up
	I0612 21:37:27.319036   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.319485   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.319516   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:27.319429   81491 retry.go:31] will retry after 474.663822ms: waiting for machine to come up
	I0612 21:37:27.796149   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.796596   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:27.796635   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:27.796564   81491 retry.go:31] will retry after 943.868595ms: waiting for machine to come up
	I0612 21:37:28.741715   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:28.742226   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:28.742259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:28.742180   81491 retry.go:31] will retry after 1.014472282s: waiting for machine to come up
	I0612 21:37:29.758384   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:29.758928   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:29.758947   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:29.758867   81491 retry.go:31] will retry after 971.872729ms: waiting for machine to come up
	I0612 21:37:29.845647   80157 start.go:360] acquireMachinesLock for no-preload-087875: {Name:mk2e80cace89023dedef428df19ba26686582917 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 21:37:30.732362   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:30.732794   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:30.732827   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:30.732742   81491 retry.go:31] will retry after 1.352202491s: waiting for machine to come up
	I0612 21:37:32.087272   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:32.087702   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:32.087726   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:32.087663   81491 retry.go:31] will retry after 2.276552983s: waiting for machine to come up
	I0612 21:37:34.367159   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:34.367579   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:34.367613   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:34.367520   81491 retry.go:31] will retry after 1.785262755s: waiting for machine to come up
	I0612 21:37:36.154927   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:36.155388   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:36.155412   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:36.155357   81491 retry.go:31] will retry after 3.309693081s: waiting for machine to come up
	I0612 21:37:39.468800   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:39.469443   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | unable to find current IP address of domain default-k8s-diff-port-376087 in network mk-default-k8s-diff-port-376087
	I0612 21:37:39.469469   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | I0612 21:37:39.469393   81491 retry.go:31] will retry after 4.284995408s: waiting for machine to come up
	I0612 21:37:45.096430   80404 start.go:364] duration metric: took 4m40.295909999s to acquireMachinesLock for "embed-certs-591460"
	I0612 21:37:45.096485   80404 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:37:45.096490   80404 fix.go:54] fixHost starting: 
	I0612 21:37:45.096932   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:37:45.096972   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:37:45.113819   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39005
	I0612 21:37:45.114290   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:37:45.114823   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:37:45.114843   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:37:45.115208   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:37:45.115415   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:37:45.115578   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:37:45.117131   80404 fix.go:112] recreateIfNeeded on embed-certs-591460: state=Stopped err=<nil>
	I0612 21:37:45.117156   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	W0612 21:37:45.117324   80404 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:37:45.119535   80404 out.go:177] * Restarting existing kvm2 VM for "embed-certs-591460" ...
	I0612 21:37:43.759195   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.759548   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Found IP for machine: 192.168.61.80
	I0612 21:37:43.759575   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has current primary IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.759583   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Reserving static IP address...
	I0612 21:37:43.760031   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Reserved static IP address: 192.168.61.80
	I0612 21:37:43.760063   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-376087", mac: "52:54:00:01:75:58", ip: "192.168.61.80"} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.760075   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Waiting for SSH to be available...
	I0612 21:37:43.760120   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | skip adding static IP to network mk-default-k8s-diff-port-376087 - found existing host DHCP lease matching {name: "default-k8s-diff-port-376087", mac: "52:54:00:01:75:58", ip: "192.168.61.80"}
	I0612 21:37:43.760134   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Getting to WaitForSSH function...
	I0612 21:37:43.762259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.762597   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.762626   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.762741   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Using SSH client type: external
	I0612 21:37:43.762771   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa (-rw-------)
	I0612 21:37:43.762804   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:37:43.762842   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | About to run SSH command:
	I0612 21:37:43.762860   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | exit 0
	I0612 21:37:43.891446   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | SSH cmd err, output: <nil>: 
	I0612 21:37:43.891831   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetConfigRaw
	I0612 21:37:43.892485   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:43.895220   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.895625   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.895656   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.895928   80243 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/config.json ...
	I0612 21:37:43.896140   80243 machine.go:94] provisionDockerMachine start ...
	I0612 21:37:43.896161   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:43.896388   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:43.898898   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.899317   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:43.899346   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:43.899539   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:43.899727   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:43.899868   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:43.900019   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:43.900171   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:43.900360   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:43.900371   80243 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:37:44.016295   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:37:44.016327   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.016577   80243 buildroot.go:166] provisioning hostname "default-k8s-diff-port-376087"
	I0612 21:37:44.016602   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.016804   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.019396   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.019732   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.019763   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.019881   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.020084   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.020214   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.020418   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.020612   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.020803   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.020820   80243 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-376087 && echo "default-k8s-diff-port-376087" | sudo tee /etc/hostname
	I0612 21:37:44.146019   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-376087
	
	I0612 21:37:44.146049   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.148758   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.149204   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.149238   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.149356   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.149538   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.149731   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.149873   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.150013   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.150187   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.150204   80243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-376087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-376087/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-376087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:37:44.272821   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:37:44.272852   80243 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:37:44.272887   80243 buildroot.go:174] setting up certificates
	I0612 21:37:44.272895   80243 provision.go:84] configureAuth start
	I0612 21:37:44.272903   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetMachineName
	I0612 21:37:44.273185   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:44.275991   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.276337   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.276366   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.276591   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.279011   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.279370   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.279396   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.279521   80243 provision.go:143] copyHostCerts
	I0612 21:37:44.279576   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:37:44.279585   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:37:44.279649   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:37:44.279740   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:37:44.279748   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:37:44.279770   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:37:44.279828   80243 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:37:44.279835   80243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:37:44.279855   80243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:37:44.279914   80243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-376087 san=[127.0.0.1 192.168.61.80 default-k8s-diff-port-376087 localhost minikube]
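	The server cert above is generated in Go by minikube itself; purely as an illustration of what the step amounts to, an equivalent openssl flow signed by the same CA would look like the sketch below (file paths and the validity period are assumptions, not what the test ran):
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.default-k8s-diff-port-376087"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.61.80,DNS:default-k8s-diff-port-376087,DNS:localhost,DNS:minikube')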
	I0612 21:37:44.410909   80243 provision.go:177] copyRemoteCerts
	I0612 21:37:44.410974   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:37:44.410999   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.413740   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.414140   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.414173   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.414406   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.414597   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.414759   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.414904   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:44.501641   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:37:44.526082   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0612 21:37:44.549455   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:37:44.572447   80243 provision.go:87] duration metric: took 299.539656ms to configureAuth
	I0612 21:37:44.572473   80243 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:37:44.572632   80243 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:37:44.572731   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.575518   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.575913   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.575948   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.576170   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.576383   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.576553   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.576754   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.576913   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.577134   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.577155   80243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:37:44.851891   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:37:44.851922   80243 machine.go:97] duration metric: took 955.766062ms to provisionDockerMachine
	I0612 21:37:44.851936   80243 start.go:293] postStartSetup for "default-k8s-diff-port-376087" (driver="kvm2")
	I0612 21:37:44.851951   80243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:37:44.851970   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:44.852318   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:37:44.852352   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.855231   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.855556   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.855595   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.855727   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.855935   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.856127   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.856260   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:44.941821   80243 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:37:44.946013   80243 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:37:44.946052   80243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:37:44.946120   80243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:37:44.946200   80243 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:37:44.946281   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:37:44.955467   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:37:44.979379   80243 start.go:296] duration metric: took 127.428385ms for postStartSetup
	I0612 21:37:44.979421   80243 fix.go:56] duration metric: took 20.135375416s for fixHost
	I0612 21:37:44.979445   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:44.981891   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.982259   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:44.982287   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:44.982520   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:44.982713   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.982920   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:44.983040   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:44.983220   80243 main.go:141] libmachine: Using SSH client type: native
	I0612 21:37:44.983450   80243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0612 21:37:44.983467   80243 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 21:37:45.096266   80243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228265.072559389
	
	I0612 21:37:45.096288   80243 fix.go:216] guest clock: 1718228265.072559389
	I0612 21:37:45.096295   80243 fix.go:229] Guest: 2024-06-12 21:37:45.072559389 +0000 UTC Remote: 2024-06-12 21:37:44.979426071 +0000 UTC m=+294.653210040 (delta=93.133318ms)
	I0612 21:37:45.096313   80243 fix.go:200] guest clock delta is within tolerance: 93.133318ms
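	As a quick sanity check of the delta reported above: 21:37:45.072559389 (guest) − 21:37:44.979426071 (remote) = 0.093133318 s, i.e. the 93.133318 ms that fix.go compares against its skew tolerance.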
	I0612 21:37:45.096318   80243 start.go:83] releasing machines lock for "default-k8s-diff-port-376087", held for 20.252307995s
	I0612 21:37:45.096346   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.096683   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:45.099332   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.099761   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.099805   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.099902   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100560   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100767   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:37:45.100841   80243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:37:45.100880   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:45.100981   80243 ssh_runner.go:195] Run: cat /version.json
	I0612 21:37:45.101007   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:37:45.103590   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.103774   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104052   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.104084   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104186   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:45.104202   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:45.104210   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:45.104417   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:37:45.104430   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:45.104650   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:45.104651   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:37:45.104837   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:37:45.104852   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:45.104993   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:37:45.208199   80243 ssh_runner.go:195] Run: systemctl --version
	I0612 21:37:45.214375   80243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:37:45.370991   80243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:37:45.378676   80243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:37:45.378744   80243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:37:45.400622   80243 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:37:45.400642   80243 start.go:494] detecting cgroup driver to use...
	I0612 21:37:45.400709   80243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:37:45.416775   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:37:45.430261   80243 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:37:45.430314   80243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:37:45.445482   80243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:37:45.461471   80243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:37:45.578411   80243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:37:45.750493   80243 docker.go:233] disabling docker service ...
	I0612 21:37:45.750556   80243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:37:45.769072   80243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:37:45.784755   80243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:37:45.907970   80243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:37:46.031847   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:37:46.046473   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:37:46.067764   80243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:37:46.067813   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.080604   80243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:37:46.080660   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.093611   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.104443   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.117070   80243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:37:46.128759   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.139977   80243 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:37:46.157893   80243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
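	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (reconstructed from the commands, not a dump of the real file; the TOML section headers are shown only for orientation):
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]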
	I0612 21:37:46.168896   80243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:37:46.179765   80243 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:37:46.179816   80243 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:37:46.194059   80243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
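	A quick way to confirm the prerequisites set up above would be the following (illustrative; not commands the test itself runs):
	lsmod | grep br_netfilter                  # module loaded by the modprobe above
	sysctl net.bridge.bridge-nf-call-iptables  # key exists now; typically 1 once the module is loaded
	cat /proc/sys/net/ipv4/ip_forward          # should print 1 after the echo above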
	I0612 21:37:46.205474   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:37:46.322562   80243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:37:46.479073   80243 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:37:46.479149   80243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:37:46.484557   80243 start.go:562] Will wait 60s for crictl version
	I0612 21:37:46.484609   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:37:46.488403   80243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:37:46.529210   80243 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:37:46.529301   80243 ssh_runner.go:195] Run: crio --version
	I0612 21:37:46.561476   80243 ssh_runner.go:195] Run: crio --version
	I0612 21:37:46.594477   80243 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:37:45.120900   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Start
	I0612 21:37:45.121084   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring networks are active...
	I0612 21:37:45.121776   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring network default is active
	I0612 21:37:45.122108   80404 main.go:141] libmachine: (embed-certs-591460) Ensuring network mk-embed-certs-591460 is active
	I0612 21:37:45.122554   80404 main.go:141] libmachine: (embed-certs-591460) Getting domain xml...
	I0612 21:37:45.123260   80404 main.go:141] libmachine: (embed-certs-591460) Creating domain...
	I0612 21:37:46.357867   80404 main.go:141] libmachine: (embed-certs-591460) Waiting to get IP...
	I0612 21:37:46.358704   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.359164   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.359265   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.359144   81627 retry.go:31] will retry after 278.948395ms: waiting for machine to come up
	I0612 21:37:46.639971   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.640491   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.640523   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.640433   81627 retry.go:31] will retry after 342.550517ms: waiting for machine to come up
	I0612 21:37:46.985065   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:46.985590   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:46.985618   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:46.985548   81627 retry.go:31] will retry after 297.683214ms: waiting for machine to come up
	I0612 21:37:47.285192   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:47.285650   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:47.285688   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:47.285615   81627 retry.go:31] will retry after 415.994572ms: waiting for machine to come up
	I0612 21:37:47.702894   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:47.703398   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:47.703424   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:47.703353   81627 retry.go:31] will retry after 672.441633ms: waiting for machine to come up
	I0612 21:37:48.377227   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:48.377772   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:48.377802   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:48.377735   81627 retry.go:31] will retry after 790.165478ms: waiting for machine to come up
	I0612 21:37:49.169651   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:49.170194   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:49.170224   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:49.170134   81627 retry.go:31] will retry after 953.609739ms: waiting for machine to come up
	I0612 21:37:46.595772   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetIP
	I0612 21:37:46.599221   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:46.599682   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:37:46.599712   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:37:46.599919   80243 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0612 21:37:46.604573   80243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
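	The one-liner above, unrolled for readability (same effect; purely illustrative):
	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$    # drop any stale entry
	printf '192.168.61.1\thost.minikube.internal\n' >> /tmp/h.$$   # append the gateway mapping
	sudo cp /tmp/h.$$ /etc/hosts                                   # install the rewritten file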
	I0612 21:37:46.617274   80243 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-376087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:37:46.617388   80243 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:37:46.617443   80243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:37:46.663227   80243 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:37:46.663306   80243 ssh_runner.go:195] Run: which lz4
	I0612 21:37:46.667878   80243 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 21:37:46.672384   80243 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:37:46.672416   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 21:37:48.195844   80243 crio.go:462] duration metric: took 1.527996646s to copy over tarball
	I0612 21:37:48.195908   80243 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:37:50.125800   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:50.126305   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:50.126337   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:50.126260   81627 retry.go:31] will retry after 938.251336ms: waiting for machine to come up
	I0612 21:37:51.065851   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:51.066225   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:51.066247   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:51.066194   81627 retry.go:31] will retry after 1.635454683s: waiting for machine to come up
	I0612 21:37:52.704193   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:52.704663   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:52.704687   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:52.704633   81627 retry.go:31] will retry after 1.56455027s: waiting for machine to come up
	I0612 21:37:54.271391   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:54.271873   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:54.271919   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:54.271826   81627 retry.go:31] will retry after 2.052574222s: waiting for machine to come up
	I0612 21:37:50.464553   80243 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.268615304s)
	I0612 21:37:50.464601   80243 crio.go:469] duration metric: took 2.268715227s to extract the tarball
	I0612 21:37:50.464612   80243 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:37:50.502406   80243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:37:50.550796   80243 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:37:50.550821   80243 cache_images.go:84] Images are preloaded, skipping loading
	I0612 21:37:50.550831   80243 kubeadm.go:928] updating node { 192.168.61.80 8444 v1.30.1 crio true true} ...
	I0612 21:37:50.550957   80243 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-376087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
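	One detail in the kubelet drop-in printed above: the bare "ExecStart=" line is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet.service before the drop-in sets its own command. To confirm the override on the node (illustrative; not part of the test run):
	systemctl cat kubelet                  # base unit plus the 10-kubeadm.conf drop-in
	systemctl show -p ExecStart kubelet    # the effective, overridden command line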
	I0612 21:37:50.551042   80243 ssh_runner.go:195] Run: crio config
	I0612 21:37:50.603232   80243 cni.go:84] Creating CNI manager for ""
	I0612 21:37:50.603256   80243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:37:50.603268   80243 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:37:50.603299   80243 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.80 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-376087 NodeName:default-k8s-diff-port-376087 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:37:50.603459   80243 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.80
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-376087"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:37:50.603524   80243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:37:50.614003   80243 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:37:50.614082   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:37:50.623416   80243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0612 21:37:50.640203   80243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:37:50.656668   80243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0612 21:37:50.674601   80243 ssh_runner.go:195] Run: grep 192.168.61.80	control-plane.minikube.internal$ /etc/hosts
	I0612 21:37:50.678858   80243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:37:50.692389   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:37:50.822225   80243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:37:50.840703   80243 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087 for IP: 192.168.61.80
	I0612 21:37:50.840734   80243 certs.go:194] generating shared ca certs ...
	I0612 21:37:50.840758   80243 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:37:50.840936   80243 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:37:50.840986   80243 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:37:50.840999   80243 certs.go:256] generating profile certs ...
	I0612 21:37:50.841133   80243 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/client.key
	I0612 21:37:50.841200   80243 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.key.0afce446
	I0612 21:37:50.841238   80243 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.key
	I0612 21:37:50.841357   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:37:50.841398   80243 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:37:50.841409   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:37:50.841438   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:37:50.841469   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:37:50.841489   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:37:50.841529   80243 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:37:50.842311   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:37:50.880075   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:37:50.914504   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:37:50.945724   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:37:50.975702   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0612 21:37:51.009817   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:37:51.039086   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:37:51.064146   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/default-k8s-diff-port-376087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:37:51.088483   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:37:51.112785   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:37:51.136192   80243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:37:51.159239   80243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:37:51.175719   80243 ssh_runner.go:195] Run: openssl version
	I0612 21:37:51.181707   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:37:51.193498   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.198415   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.198475   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:37:51.204601   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:37:51.216354   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:37:51.231979   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.236952   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.237018   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:37:51.243461   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:37:51.258481   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:37:51.273412   80243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.279356   80243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.279420   80243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:37:51.285551   80243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
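	The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's hashed-directory convention: each link is named after the certificate's subject-name hash plus a ".0" suffix. Reproducing the minikubeCA case by hand would look like:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here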
	I0612 21:37:51.298066   80243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:37:51.302791   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:37:51.309402   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:37:51.316170   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:37:51.322785   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:37:51.329066   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:37:51.335031   80243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
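	Each of the checks above relies on "openssl x509 -checkend 86400", which exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now and non-zero otherwise, for example:
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid for at least 24h"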
	I0612 21:37:51.340945   80243 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-376087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-376087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:37:51.341082   80243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:37:51.341143   80243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:37:51.383011   80243 cri.go:89] found id: ""
	I0612 21:37:51.383134   80243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:37:51.394768   80243 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:37:51.394794   80243 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:37:51.394800   80243 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:37:51.394852   80243 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:37:51.408147   80243 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:37:51.409094   80243 kubeconfig.go:125] found "default-k8s-diff-port-376087" server: "https://192.168.61.80:8444"
	I0612 21:37:51.411221   80243 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:37:51.421897   80243 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.80
	I0612 21:37:51.421934   80243 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:37:51.421949   80243 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:37:51.422029   80243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:37:51.470321   80243 cri.go:89] found id: ""
	I0612 21:37:51.470441   80243 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:37:51.488369   80243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:37:51.498367   80243 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:37:51.498388   80243 kubeadm.go:156] found existing configuration files:
	
	I0612 21:37:51.498449   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0612 21:37:51.510212   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:37:51.510287   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:37:51.520231   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0612 21:37:51.529270   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:37:51.529339   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:37:51.538902   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0612 21:37:51.548593   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:37:51.548652   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:37:51.558533   80243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0612 21:37:51.567995   80243 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:37:51.568063   80243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:37:51.577695   80243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:37:51.587794   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:51.718155   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.602448   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.820456   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.901167   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:52.977502   80243 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:37:52.977606   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.477802   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.977879   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:37:53.995753   80243 api_server.go:72] duration metric: took 1.018251882s to wait for apiserver process to appear ...
	I0612 21:37:53.995788   80243 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:37:53.995812   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:53.996308   80243 api_server.go:269] stopped: https://192.168.61.80:8444/healthz: Get "https://192.168.61.80:8444/healthz": dial tcp 192.168.61.80:8444: connect: connection refused
	I0612 21:37:54.496045   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.293362   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:37:57.293394   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:37:57.293408   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.395854   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:57.395886   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:57.496122   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:57.505090   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:57.505124   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:57.996334   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:58.000606   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:37:58.000646   80243 api_server.go:103] status: https://192.168.61.80:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:37:58.496177   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:37:58.504422   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 200:
	ok
	I0612 21:37:58.513123   80243 api_server.go:141] control plane version: v1.30.1
	I0612 21:37:58.513150   80243 api_server.go:131] duration metric: took 4.517354722s to wait for apiserver health ...
	I0612 21:37:58.513158   80243 cni.go:84] Creating CNI manager for ""
	I0612 21:37:58.513163   80243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:37:58.514696   80243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:37:56.325937   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:56.326316   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:56.326343   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:56.326261   81627 retry.go:31] will retry after 3.51636746s: waiting for machine to come up
	I0612 21:37:58.516091   80243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:37:58.541034   80243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:37:58.585635   80243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:37:58.596829   80243 system_pods.go:59] 8 kube-system pods found
	I0612 21:37:58.596859   80243 system_pods.go:61] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:37:58.596867   80243 system_pods.go:61] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:37:58.596873   80243 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:37:58.596883   80243 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:37:58.596888   80243 system_pods.go:61] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0612 21:37:58.596893   80243 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:37:58.596899   80243 system_pods.go:61] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:37:58.596904   80243 system_pods.go:61] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:37:58.596910   80243 system_pods.go:74] duration metric: took 11.248328ms to wait for pod list to return data ...
	I0612 21:37:58.596917   80243 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:37:58.600081   80243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:37:58.600107   80243 node_conditions.go:123] node cpu capacity is 2
	I0612 21:37:58.600119   80243 node_conditions.go:105] duration metric: took 3.197181ms to run NodePressure ...
	I0612 21:37:58.600134   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:37:58.911963   80243 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:37:58.918455   80243 kubeadm.go:733] kubelet initialised
	I0612 21:37:58.918475   80243 kubeadm.go:734] duration metric: took 6.490654ms waiting for restarted kubelet to initialise ...
	I0612 21:37:58.918482   80243 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:37:58.924427   80243 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.930290   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.930329   80243 pod_ready.go:81] duration metric: took 5.86525ms for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.930339   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.930346   80243 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.935394   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.935416   80243 pod_ready.go:81] duration metric: took 5.061639ms for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.935426   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.935431   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.940238   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.940268   80243 pod_ready.go:81] duration metric: took 4.829842ms for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.940286   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.940295   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:58.989649   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.989686   80243 pod_ready.go:81] duration metric: took 49.380431ms for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:58.989702   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:58.989711   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:59.389868   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-proxy-8lrgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.389903   80243 pod_ready.go:81] duration metric: took 400.174877ms for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:59.389912   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-proxy-8lrgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.389918   80243 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:37:59.790398   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.790425   80243 pod_ready.go:81] duration metric: took 400.499157ms for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	E0612 21:37:59.790435   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:37:59.790449   80243 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:00.189506   80243 pod_ready.go:97] node "default-k8s-diff-port-376087" hosting pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:00.189533   80243 pod_ready.go:81] duration metric: took 399.075983ms for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:00.189551   80243 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-376087" hosting pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:00.189559   80243 pod_ready.go:38] duration metric: took 1.271068537s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:38:00.189574   80243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:38:00.201480   80243 ops.go:34] apiserver oom_adj: -16
	I0612 21:38:00.201504   80243 kubeadm.go:591] duration metric: took 8.806697524s to restartPrimaryControlPlane
	I0612 21:38:00.201514   80243 kubeadm.go:393] duration metric: took 8.860579681s to StartCluster
	I0612 21:38:00.201536   80243 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:00.201601   80243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:38:00.203106   80243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:00.203416   80243 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.80 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:38:00.205568   80243 out.go:177] * Verifying Kubernetes components...
	I0612 21:38:00.203448   80243 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:38:00.203614   80243 config.go:182] Loaded profile config "default-k8s-diff-port-376087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:00.207110   80243 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207120   80243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:00.207120   80243 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207143   80243 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-376087"
	I0612 21:38:00.207166   80243 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.207193   80243 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:38:00.207187   80243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-376087"
	I0612 21:38:00.207208   80243 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.207222   80243 addons.go:243] addon metrics-server should already be in state true
	I0612 21:38:00.207230   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.207263   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.207490   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207511   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207519   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.207544   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.207553   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.207572   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.222521   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0612 21:38:00.222979   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.223496   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.223523   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.223899   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.224519   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.224555   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.227511   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
	I0612 21:38:00.227543   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33041
	I0612 21:38:00.227874   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.227930   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.228402   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.228409   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.228426   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.228471   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.228776   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.228780   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.228952   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.229291   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.229323   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.232640   80243 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-376087"
	W0612 21:38:00.232662   80243 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:38:00.232690   80243 host.go:66] Checking if "default-k8s-diff-port-376087" exists ...
	I0612 21:38:00.233072   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.233103   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.240883   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38355
	I0612 21:38:00.241363   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.241839   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.241861   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.242217   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.242434   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.244544   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.244604   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0612 21:38:00.246924   80243 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:38:00.244915   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.248406   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:38:00.248430   80243 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:38:00.248451   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.248861   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.248887   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.249211   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.249431   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.251070   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.251137   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43271
	I0612 21:38:00.252729   80243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:00.251644   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.252033   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.252601   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.254033   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.254079   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.254111   80243 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:38:00.254127   80243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:38:00.254148   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.254211   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.254399   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.254515   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.254542   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.254712   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.254926   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.256878   80243 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:00.256948   80243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:00.257836   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.258073   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.258105   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.258767   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.258993   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.259141   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.259283   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.272822   80243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42339
	I0612 21:38:00.273238   80243 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:00.273710   80243 main.go:141] libmachine: Using API Version  1
	I0612 21:38:00.273734   80243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:00.274221   80243 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:00.274411   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetState
	I0612 21:38:00.276056   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .DriverName
	I0612 21:38:00.276286   80243 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:38:00.276302   80243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:38:00.276323   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHHostname
	I0612 21:38:00.279285   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.279351   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:75:58", ip: ""} in network mk-default-k8s-diff-port-376087: {Iface:virbr3 ExpiryTime:2024-06-12 22:37:35 +0000 UTC Type:0 Mac:52:54:00:01:75:58 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:default-k8s-diff-port-376087 Clientid:01:52:54:00:01:75:58}
	I0612 21:38:00.279400   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | domain default-k8s-diff-port-376087 has defined IP address 192.168.61.80 and MAC address 52:54:00:01:75:58 in network mk-default-k8s-diff-port-376087
	I0612 21:38:00.279516   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHPort
	I0612 21:38:00.279675   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHKeyPath
	I0612 21:38:00.279809   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .GetSSHUsername
	I0612 21:38:00.279940   80243 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/default-k8s-diff-port-376087/id_rsa Username:docker}
	I0612 21:38:00.392656   80243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:00.411972   80243 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-376087" to be "Ready" ...
	I0612 21:38:00.502108   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:38:00.504572   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:38:00.504590   80243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:38:00.522021   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:38:00.522057   80243 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:38:00.538366   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:38:00.541981   80243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:38:00.541999   80243 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:38:00.561335   80243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:38:01.519955   80243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.017815416s)
	I0612 21:38:01.520006   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520019   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520087   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520100   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520312   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520334   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520343   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520350   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520422   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520435   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520444   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.520452   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.520554   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520573   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.520647   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.520678   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.520680   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.528807   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.528827   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.529143   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.529162   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.529166   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.556376   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.556399   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.556701   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) DBG | Closing plugin on server side
	I0612 21:38:01.556750   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.556762   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.556780   80243 main.go:141] libmachine: Making call to close driver server
	I0612 21:38:01.556791   80243 main.go:141] libmachine: (default-k8s-diff-port-376087) Calling .Close
	I0612 21:38:01.557157   80243 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:38:01.557179   80243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:38:01.557190   80243 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-376087"
	I0612 21:38:01.559103   80243 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:37:59.844024   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:37:59.844481   80404 main.go:141] libmachine: (embed-certs-591460) DBG | unable to find current IP address of domain embed-certs-591460 in network mk-embed-certs-591460
	I0612 21:37:59.844505   80404 main.go:141] libmachine: (embed-certs-591460) DBG | I0612 21:37:59.844433   81627 retry.go:31] will retry after 3.77902453s: waiting for machine to come up
	I0612 21:38:03.626861   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.627380   80404 main.go:141] libmachine: (embed-certs-591460) Found IP for machine: 192.168.39.147
	I0612 21:38:03.627399   80404 main.go:141] libmachine: (embed-certs-591460) Reserving static IP address...
	I0612 21:38:03.627416   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has current primary IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.627917   80404 main.go:141] libmachine: (embed-certs-591460) Reserved static IP address: 192.168.39.147
	I0612 21:38:03.627964   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "embed-certs-591460", mac: "52:54:00:41:f7:d9", ip: "192.168.39.147"} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.627981   80404 main.go:141] libmachine: (embed-certs-591460) Waiting for SSH to be available...
	I0612 21:38:03.628011   80404 main.go:141] libmachine: (embed-certs-591460) DBG | skip adding static IP to network mk-embed-certs-591460 - found existing host DHCP lease matching {name: "embed-certs-591460", mac: "52:54:00:41:f7:d9", ip: "192.168.39.147"}
	I0612 21:38:03.628030   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Getting to WaitForSSH function...
	I0612 21:38:03.630082   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.630480   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.630581   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.630762   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Using SSH client type: external
	I0612 21:38:03.630802   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa (-rw-------)
	I0612 21:38:03.630846   80404 main.go:141] libmachine: (embed-certs-591460) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:03.630872   80404 main.go:141] libmachine: (embed-certs-591460) DBG | About to run SSH command:
	I0612 21:38:03.630882   80404 main.go:141] libmachine: (embed-certs-591460) DBG | exit 0
	I0612 21:38:03.755304   80404 main.go:141] libmachine: (embed-certs-591460) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:03.755720   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetConfigRaw
	I0612 21:38:03.756310   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:03.758608   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.758927   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.758966   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.759153   80404 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/config.json ...
	I0612 21:38:03.759390   80404 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:03.759414   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:03.759641   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.761954   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.762215   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.762244   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.762371   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.762525   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.762689   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.762842   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.762995   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.763183   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.763206   80404 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:03.867900   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:03.867936   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:03.868185   80404 buildroot.go:166] provisioning hostname "embed-certs-591460"
	I0612 21:38:03.868210   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:03.868430   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.871347   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.871690   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.871721   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.871816   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.871977   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.872130   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.872258   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.872408   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.872588   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.872604   80404 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-591460 && echo "embed-certs-591460" | sudo tee /etc/hostname
	I0612 21:38:03.990526   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-591460
	
	I0612 21:38:03.990550   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:03.993057   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.993458   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:03.993485   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:03.993646   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:03.993830   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.993985   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:03.994125   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:03.994297   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:03.994499   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:03.994524   80404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-591460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-591460/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-591460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:04.120595   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:04.120623   80404 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:04.120640   80404 buildroot.go:174] setting up certificates
	I0612 21:38:04.120650   80404 provision.go:84] configureAuth start
	I0612 21:38:04.120658   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetMachineName
	I0612 21:38:04.120910   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:04.123483   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.123854   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.123879   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.124153   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.126901   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.127293   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.127318   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.127494   80404 provision.go:143] copyHostCerts
	I0612 21:38:04.127554   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:04.127566   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:04.127635   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:04.127736   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:04.127747   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:04.127785   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:04.127860   80404 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:04.127870   80404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:04.127896   80404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:04.127960   80404 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.embed-certs-591460 san=[127.0.0.1 192.168.39.147 embed-certs-591460 localhost minikube]
	I0612 21:38:04.265296   80404 provision.go:177] copyRemoteCerts
	I0612 21:38:04.265361   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:04.265392   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.267703   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.268044   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.268090   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.268244   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.268421   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.268583   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.268780   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.349440   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:04.374868   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0612 21:38:04.398419   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:38:04.423319   80404 provision.go:87] duration metric: took 302.657777ms to configureAuth
	I0612 21:38:04.423353   80404 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:04.423514   80404 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:04.423586   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.426301   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.426612   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.426641   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.426796   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.426971   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.427186   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.427331   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.427553   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:04.427723   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:04.427739   80404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:04.689161   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:04.689199   80404 machine.go:97] duration metric: took 929.790838ms to provisionDockerMachine
	I0612 21:38:04.689212   80404 start.go:293] postStartSetup for "embed-certs-591460" (driver="kvm2")
	I0612 21:38:04.689223   80404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:04.689242   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.689569   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:04.689616   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.692484   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.692841   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.692864   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.693002   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.693191   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.693326   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.693469   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.923975   80762 start.go:364] duration metric: took 4m11.963543792s to acquireMachinesLock for "old-k8s-version-983302"
	I0612 21:38:04.924056   80762 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:38:04.924068   80762 fix.go:54] fixHost starting: 
	I0612 21:38:04.924507   80762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:04.924543   80762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:04.942022   80762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0612 21:38:04.942428   80762 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:04.942891   80762 main.go:141] libmachine: Using API Version  1
	I0612 21:38:04.942917   80762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:04.943345   80762 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:04.943553   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:04.943726   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetState
	I0612 21:38:04.945403   80762 fix.go:112] recreateIfNeeded on old-k8s-version-983302: state=Stopped err=<nil>
	I0612 21:38:04.945427   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	W0612 21:38:04.945581   80762 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:38:04.947672   80762 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-983302" ...
	I0612 21:38:01.560387   80243 addons.go:510] duration metric: took 1.356939902s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0612 21:38:02.416070   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:04.416451   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:04.774287   80404 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:04.778568   80404 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:04.778596   80404 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:04.778667   80404 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:04.778740   80404 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:04.778819   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:04.788602   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:04.813969   80404 start.go:296] duration metric: took 124.741162ms for postStartSetup
	I0612 21:38:04.814020   80404 fix.go:56] duration metric: took 19.717527303s for fixHost
	I0612 21:38:04.814049   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.816907   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.817268   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.817294   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.817511   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.817728   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.817905   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.818087   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.818293   80404 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:04.818501   80404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0612 21:38:04.818516   80404 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:04.923846   80404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228284.879920542
	
	I0612 21:38:04.923868   80404 fix.go:216] guest clock: 1718228284.879920542
	I0612 21:38:04.923874   80404 fix.go:229] Guest: 2024-06-12 21:38:04.879920542 +0000 UTC Remote: 2024-06-12 21:38:04.814026698 +0000 UTC m=+300.152179547 (delta=65.893844ms)
	I0612 21:38:04.923890   80404 fix.go:200] guest clock delta is within tolerance: 65.893844ms
	I0612 21:38:04.923894   80404 start.go:83] releasing machines lock for "embed-certs-591460", held for 19.827427255s
	I0612 21:38:04.923920   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.924155   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:04.926708   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.927102   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.927146   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.927281   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.927788   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.927955   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:38:04.928043   80404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:04.928099   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.928158   80404 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:04.928182   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:38:04.930931   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931237   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931377   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.931415   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931561   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:04.931587   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:04.931592   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.931742   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:38:04.931790   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.931916   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.931916   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:38:04.932111   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:04.932127   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:38:04.932250   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:38:05.009184   80404 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:05.035746   80404 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:05.181527   80404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:05.189035   80404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:05.189113   80404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:05.205860   80404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:05.205886   80404 start.go:494] detecting cgroup driver to use...
	I0612 21:38:05.205957   80404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:05.223913   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:05.239598   80404 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:05.239679   80404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:05.253501   80404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:05.268094   80404 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:05.397260   80404 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:05.560454   80404 docker.go:233] disabling docker service ...
	I0612 21:38:05.560532   80404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:05.579197   80404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:05.593420   80404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:05.728145   80404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:05.860041   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:05.876025   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:05.895242   80404 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:38:05.895336   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.906575   80404 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:05.906662   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.918248   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.929178   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.942169   80404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:05.953542   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.969045   80404 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:05.989509   80404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:06.001532   80404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:06.012676   80404 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:06.012740   80404 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:06.030028   80404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:06.048168   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:06.190039   80404 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:06.349088   80404 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:06.349151   80404 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:06.355251   80404 start.go:562] Will wait 60s for crictl version
	I0612 21:38:06.355321   80404 ssh_runner.go:195] Run: which crictl
	I0612 21:38:06.359456   80404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:06.400450   80404 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:06.400525   80404 ssh_runner.go:195] Run: crio --version
	I0612 21:38:06.430078   80404 ssh_runner.go:195] Run: crio --version
	I0612 21:38:06.461616   80404 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:38:04.949078   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .Start
	I0612 21:38:04.949226   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring networks are active...
	I0612 21:38:04.949936   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network default is active
	I0612 21:38:04.950371   80762 main.go:141] libmachine: (old-k8s-version-983302) Ensuring network mk-old-k8s-version-983302 is active
	I0612 21:38:04.950813   80762 main.go:141] libmachine: (old-k8s-version-983302) Getting domain xml...
	I0612 21:38:04.951549   80762 main.go:141] libmachine: (old-k8s-version-983302) Creating domain...
	I0612 21:38:06.296150   80762 main.go:141] libmachine: (old-k8s-version-983302) Waiting to get IP...
	I0612 21:38:06.296978   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.297465   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.297570   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.297453   81824 retry.go:31] will retry after 256.609938ms: waiting for machine to come up
	I0612 21:38:06.556307   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.556935   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.556967   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.556884   81824 retry.go:31] will retry after 285.754887ms: waiting for machine to come up
	I0612 21:38:06.844674   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:06.845227   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:06.845255   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:06.845171   81824 retry.go:31] will retry after 326.266367ms: waiting for machine to come up
	I0612 21:38:07.172788   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:07.173414   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:07.173447   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:07.173353   81824 retry.go:31] will retry after 393.443927ms: waiting for machine to come up
	I0612 21:38:07.568084   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:07.568645   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:07.568673   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:07.568609   81824 retry.go:31] will retry after 726.66775ms: waiting for machine to come up
	I0612 21:38:06.462860   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetIP
	I0612 21:38:06.466111   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:06.466521   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:38:06.466551   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:38:06.466837   80404 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:06.471361   80404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:06.485595   80404 kubeadm.go:877] updating cluster {Name:embed-certs-591460 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:06.485718   80404 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:38:06.485761   80404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:06.528708   80404 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:38:06.528778   80404 ssh_runner.go:195] Run: which lz4
	I0612 21:38:06.533340   80404 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0612 21:38:06.538076   80404 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:38:06.538115   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0612 21:38:08.078495   80404 crio.go:462] duration metric: took 1.545201872s to copy over tarball
	I0612 21:38:08.078573   80404 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:38:06.917632   80243 node_ready.go:53] node "default-k8s-diff-port-376087" has status "Ready":"False"
	I0612 21:38:07.916734   80243 node_ready.go:49] node "default-k8s-diff-port-376087" has status "Ready":"True"
	I0612 21:38:07.916763   80243 node_ready.go:38] duration metric: took 7.504763576s for node "default-k8s-diff-port-376087" to be "Ready" ...
	I0612 21:38:07.916775   80243 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:38:07.924249   80243 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.931751   80243 pod_ready.go:92] pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:07.931773   80243 pod_ready.go:81] duration metric: took 7.493608ms for pod "coredns-7db6d8ff4d-cllsk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.931782   80243 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.937804   80243 pod_ready.go:92] pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:07.937880   80243 pod_ready.go:81] duration metric: took 6.090191ms for pod "etcd-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:07.937904   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:09.944927   80243 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:08.296811   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:08.297295   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:08.297319   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:08.297250   81824 retry.go:31] will retry after 658.540746ms: waiting for machine to come up
	I0612 21:38:08.957164   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:08.957611   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:08.957635   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:08.957576   81824 retry.go:31] will retry after 921.725713ms: waiting for machine to come up
	I0612 21:38:09.880881   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:09.881672   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:09.881703   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:09.881604   81824 retry.go:31] will retry after 1.355846361s: waiting for machine to come up
	I0612 21:38:11.238616   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:11.239058   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:11.239094   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:11.238996   81824 retry.go:31] will retry after 1.3469357s: waiting for machine to come up
	I0612 21:38:12.587245   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:12.587747   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:12.587785   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:12.587683   81824 retry.go:31] will retry after 1.616666063s: waiting for machine to come up
	I0612 21:38:10.426384   80404 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.347778968s)
	I0612 21:38:10.426418   80404 crio.go:469] duration metric: took 2.347893056s to extract the tarball
	I0612 21:38:10.426427   80404 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:38:10.472235   80404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:10.522846   80404 crio.go:514] all images are preloaded for cri-o runtime.
	I0612 21:38:10.522869   80404 cache_images.go:84] Images are preloaded, skipping loading
	I0612 21:38:10.522876   80404 kubeadm.go:928] updating node { 192.168.39.147 8443 v1.30.1 crio true true} ...
	I0612 21:38:10.523007   80404 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-591460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:38:10.523163   80404 ssh_runner.go:195] Run: crio config
	I0612 21:38:10.577165   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:38:10.577193   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:10.577209   80404 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:38:10.577244   80404 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-591460 NodeName:embed-certs-591460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:38:10.577400   80404 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-591460"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:38:10.577479   80404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:38:10.587499   80404 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:38:10.587573   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:38:10.597410   80404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0612 21:38:10.614617   80404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:38:10.632222   80404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0612 21:38:10.649693   80404 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I0612 21:38:10.653639   80404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:10.666501   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:10.802679   80404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:10.820975   80404 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460 for IP: 192.168.39.147
	I0612 21:38:10.821001   80404 certs.go:194] generating shared ca certs ...
	I0612 21:38:10.821022   80404 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:10.821187   80404 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:38:10.821233   80404 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:38:10.821243   80404 certs.go:256] generating profile certs ...
	I0612 21:38:10.821326   80404 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/client.key
	I0612 21:38:10.821402   80404 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.key.3b2e21e0
	I0612 21:38:10.821440   80404 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.key
	I0612 21:38:10.821575   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:38:10.821616   80404 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:38:10.821626   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:38:10.821655   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:38:10.821706   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:38:10.821751   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:38:10.821812   80404 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:10.822621   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:38:10.879261   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:38:10.924352   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:38:10.961294   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:38:10.993792   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0612 21:38:11.039515   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:38:11.063161   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:38:11.086759   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/embed-certs-591460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:38:11.109693   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:38:11.133083   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:38:11.155716   80404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:38:11.181860   80404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:38:11.199989   80404 ssh_runner.go:195] Run: openssl version
	I0612 21:38:11.205811   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:38:11.216640   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.221692   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.221754   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:38:11.227829   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:38:11.239918   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:38:11.251648   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.256123   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.256176   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:11.261880   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:38:11.273184   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:38:11.284832   80404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.289679   80404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.289732   80404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:38:11.295338   80404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
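The openssl/ln steps above mirror what c_rehash does: each CA placed under /usr/share/ca-certificates gets a hash-named symlink (for example b5213941.0 or 3ec20f2e.0) in /etc/ssl/certs so OpenSSL-based clients can resolve it from the system trust store. A minimal Go sketch of those same two steps, shelling out to the identical openssl and ln invocations, is shown below; it is illustrative only, not minikube's implementation, and the paths are the ones seen in the log.

```go
// cahash.go - illustrative sketch: compute the OpenSSL subject hash of a CA
// certificate and link it into /etc/ssl/certs/<hash>.0, the same steps that
// the `openssl x509 -hash -noout` and `ln -fs` commands in the log perform.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func installCAHashLink(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// -f replaces a stale link, -s makes it symbolic, matching the log.
	if err := exec.Command("sudo", "ln", "-fs", pemPath, link).Run(); err != nil {
		return fmt.Errorf("linking %s -> %s: %w", link, pemPath, err)
	}
	return nil
}

func main() {
	if err := installCAHashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```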
	I0612 21:38:11.306317   80404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:38:11.310737   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:38:11.320403   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:38:11.327756   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:38:11.333976   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:38:11.340200   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:38:11.346386   80404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
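The `openssl x509 -checkend 86400` runs above verify that each existing control-plane certificate stays valid for at least another 24 hours before the restart reuses it. A minimal Go equivalent using crypto/x509 is sketched below; it is illustrative only (the file list and 24h threshold are copied from the log), not minikube's actual code.

```go
// certcheck.go - illustrative sketch: report certificates that expire within
// 24 hours, roughly what `openssl x509 -checkend 86400` does in the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the certificate is already expired or expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Paths taken from the log above; adjust as needed.
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		if soon {
			fmt.Printf("%s expires within 24h and should be regenerated\n", c)
		}
	}
}
```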
	I0612 21:38:11.352268   80404 kubeadm.go:391] StartCluster: {Name:embed-certs-591460 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-591460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:38:11.352385   80404 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:38:11.352435   80404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:11.390802   80404 cri.go:89] found id: ""
	I0612 21:38:11.390870   80404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:38:11.402604   80404 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:38:11.402626   80404 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:38:11.402630   80404 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:38:11.402682   80404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:38:11.413636   80404 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:38:11.414999   80404 kubeconfig.go:125] found "embed-certs-591460" server: "https://192.168.39.147:8443"
	I0612 21:38:11.417654   80404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:38:11.427456   80404 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.147
	I0612 21:38:11.427496   80404 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:38:11.427509   80404 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:38:11.427559   80404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:11.462135   80404 cri.go:89] found id: ""
	I0612 21:38:11.462211   80404 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:38:11.478193   80404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:38:11.488816   80404 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:38:11.488838   80404 kubeadm.go:156] found existing configuration files:
	
	I0612 21:38:11.488899   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:38:11.498079   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:38:11.498154   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:38:11.508044   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:38:11.519721   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:38:11.519785   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:38:11.529554   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:38:11.538699   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:38:11.538750   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:38:11.548154   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:38:11.559980   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:38:11.560053   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:38:11.569737   80404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:38:11.579812   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:11.703454   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:12.773142   80404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069644541s)
	I0612 21:38:12.773183   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:12.991458   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:13.080268   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:13.207751   80404 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:38:13.207934   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:13.708672   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:14.208389   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:14.268408   80404 api_server.go:72] duration metric: took 1.060631955s to wait for apiserver process to appear ...
	I0612 21:38:14.268443   80404 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:38:14.268464   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:14.269096   80404 api_server.go:269] stopped: https://192.168.39.147:8443/healthz: Get "https://192.168.39.147:8443/healthz": dial tcp 192.168.39.147:8443: connect: connection refused
	I0612 21:38:10.445507   80243 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.445530   80243 pod_ready.go:81] duration metric: took 2.50760731s for pod "kube-apiserver-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.445542   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.450290   80243 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.450310   80243 pod_ready.go:81] duration metric: took 4.759656ms for pod "kube-controller-manager-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.450323   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.454909   80243 pod_ready.go:92] pod "kube-proxy-8lrgv" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:10.454940   80243 pod_ready.go:81] duration metric: took 4.597123ms for pod "kube-proxy-8lrgv" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:10.454951   80243 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:12.587416   80243 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:13.505858   80243 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:13.505884   80243 pod_ready.go:81] duration metric: took 3.050925673s for pod "kube-scheduler-default-k8s-diff-port-376087" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:13.505896   80243 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:14.206281   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:14.206781   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:14.206810   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:14.206716   81824 retry.go:31] will retry after 2.057638604s: waiting for machine to come up
	I0612 21:38:16.266372   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:16.266920   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:16.266955   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:16.266858   81824 retry.go:31] will retry after 2.387834661s: waiting for machine to come up
	I0612 21:38:14.769114   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.056504   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:38:17.056539   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:38:17.056557   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.075356   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:38:17.075391   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:38:17.268731   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.277080   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:38:17.277111   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:38:17.768638   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:17.773438   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:38:17.773464   80404 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:38:18.269037   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:38:18.273939   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0612 21:38:18.286895   80404 api_server.go:141] control plane version: v1.30.1
	I0612 21:38:18.286922   80404 api_server.go:131] duration metric: took 4.018473342s to wait for apiserver health ...
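The healthz sequence above is the apiserver coming up: anonymous requests are first rejected with 403, then /healthz returns 500 while post-start hooks (rbac/bootstrap-roles, the system priority classes, apiservice discovery) finish, and finally 200. A hedged Go sketch of a comparable polling loop follows; skipping TLS verification is an assumption made only to keep the sketch short and is not necessarily what minikube does.

```go
// healthzwait.go - illustrative sketch: poll the apiserver /healthz endpoint
// until it returns 200, mirroring the wait loop in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: verification is skipped only for brevity; a real client
		// should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.147:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```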
	I0612 21:38:18.286931   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:38:18.286937   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:18.288955   80404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:38:18.290619   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:38:18.305334   80404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:38:18.336590   80404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:38:18.351276   80404 system_pods.go:59] 8 kube-system pods found
	I0612 21:38:18.351320   80404 system_pods.go:61] "coredns-7db6d8ff4d-z99cq" [575689b8-3c51-45c8-874c-481e4b9db39b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:38:18.351331   80404 system_pods.go:61] "etcd-embed-certs-591460" [190c1552-6bca-41f2-9ea9-e415e1ae9406] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:38:18.351342   80404 system_pods.go:61] "kube-apiserver-embed-certs-591460" [c0fed28f-1d80-44eb-a66a-3a5b36704882] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:38:18.351350   80404 system_pods.go:61] "kube-controller-manager-embed-certs-591460" [79758f2a-2517-4a76-a3ae-536ac3adf781] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:38:18.351357   80404 system_pods.go:61] "kube-proxy-79kz5" [74ddb284-7cb2-46ec-ab9f-246dbfa0c4ec] Running
	I0612 21:38:18.351372   80404 system_pods.go:61] "kube-scheduler-embed-certs-591460" [d9916521-fcc1-4bf1-8b03-8a5553f07bd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:38:18.351383   80404 system_pods.go:61] "metrics-server-569cc877fc-bkhxn" [f78482c8-82ea-4dbd-999f-2e4c73c98b65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:38:18.351396   80404 system_pods.go:61] "storage-provisioner" [b3b117f7-ac44-4430-afb4-c6991ce1b71d] Running
	I0612 21:38:18.351407   80404 system_pods.go:74] duration metric: took 14.792966ms to wait for pod list to return data ...
	I0612 21:38:18.351419   80404 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:38:18.357736   80404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:38:18.357769   80404 node_conditions.go:123] node cpu capacity is 2
	I0612 21:38:18.357786   80404 node_conditions.go:105] duration metric: took 6.360028ms to run NodePressure ...
	I0612 21:38:18.357805   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:18.634312   80404 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:38:18.638679   80404 kubeadm.go:733] kubelet initialised
	I0612 21:38:18.638700   80404 kubeadm.go:734] duration metric: took 4.362243ms waiting for restarted kubelet to initialise ...
	I0612 21:38:18.638706   80404 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:38:18.643840   80404 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.648561   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.648585   80404 pod_ready.go:81] duration metric: took 4.721795ms for pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.648597   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "coredns-7db6d8ff4d-z99cq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.648606   80404 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.654013   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "etcd-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.654036   80404 pod_ready.go:81] duration metric: took 5.419602ms for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.654046   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "etcd-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.654054   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.659445   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.659468   80404 pod_ready.go:81] duration metric: took 5.404211ms for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.659479   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.659487   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:18.741451   80404 pod_ready.go:97] node "embed-certs-591460" hosting pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.741480   80404 pod_ready.go:81] duration metric: took 81.981354ms for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	E0612 21:38:18.741489   80404 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-591460" hosting pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-591460" has status "Ready":"False"
	I0612 21:38:18.741495   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-79kz5" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:19.140710   80404 pod_ready.go:92] pod "kube-proxy-79kz5" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:19.140734   80404 pod_ready.go:81] duration metric: took 399.230349ms for pod "kube-proxy-79kz5" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:19.140744   80404 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
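The pod_ready waits above poll each system-critical pod for the Ready condition and skip pods whose node is not yet Ready. A rough client-go sketch of the per-pod check is below; the kubeconfig path and pod name are examples taken from the log context, and this is not minikube's pod_ready.go.

```go
// podready.go - illustrative sketch: wait until a kube-system pod reports the
// Ready condition as True, similar to the pod_ready waits in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig path chosen for illustration only.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		if ctx.Err() != nil {
			fmt.Println("timed out waiting for pod to be Ready")
			return
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-embed-certs-591460", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```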
	I0612 21:38:15.513300   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:18.013924   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:20.024841   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:18.656575   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:18.657074   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | unable to find current IP address of domain old-k8s-version-983302 in network mk-old-k8s-version-983302
	I0612 21:38:18.657111   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | I0612 21:38:18.657022   81824 retry.go:31] will retry after 3.518256927s: waiting for machine to come up
	I0612 21:38:22.176416   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.176901   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has current primary IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.176930   80762 main.go:141] libmachine: (old-k8s-version-983302) Found IP for machine: 192.168.50.81
	I0612 21:38:22.176965   80762 main.go:141] libmachine: (old-k8s-version-983302) Reserving static IP address...
	I0612 21:38:22.177385   80762 main.go:141] libmachine: (old-k8s-version-983302) Reserved static IP address: 192.168.50.81
	I0612 21:38:22.177422   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "old-k8s-version-983302", mac: "52:54:00:7b:c8:d2", ip: "192.168.50.81"} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.177435   80762 main.go:141] libmachine: (old-k8s-version-983302) Waiting for SSH to be available...
	I0612 21:38:22.177459   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | skip adding static IP to network mk-old-k8s-version-983302 - found existing host DHCP lease matching {name: "old-k8s-version-983302", mac: "52:54:00:7b:c8:d2", ip: "192.168.50.81"}
	I0612 21:38:22.177471   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Getting to WaitForSSH function...
	I0612 21:38:22.179728   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.180130   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.180158   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.180273   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH client type: external
	I0612 21:38:22.180334   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa (-rw-------)
	I0612 21:38:22.180368   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:22.180387   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | About to run SSH command:
	I0612 21:38:22.180399   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | exit 0
	I0612 21:38:22.308621   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:22.308979   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetConfigRaw
	I0612 21:38:22.309620   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:22.312747   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.313124   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.313155   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.313421   80762 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/config.json ...
	I0612 21:38:22.313635   80762 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:22.313658   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:22.313884   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.316476   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.316961   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.317014   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.317221   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.317408   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.317600   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.317775   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.317955   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.318195   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.318207   80762 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:22.431693   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:22.431728   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.431979   80762 buildroot.go:166] provisioning hostname "old-k8s-version-983302"
	I0612 21:38:22.432006   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.432191   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.434830   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.435267   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.435300   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.435431   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.435598   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.435718   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.435826   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.436056   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.436237   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.436252   80762 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-983302 && echo "old-k8s-version-983302" | sudo tee /etc/hostname
	I0612 21:38:22.563119   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-983302
	
	I0612 21:38:22.563184   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.565915   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.566281   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.566315   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.566513   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.566704   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.566885   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.567021   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.567243   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:22.567463   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:22.567490   80762 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-983302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-983302/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-983302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:22.690443   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:22.690474   80762 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:22.690494   80762 buildroot.go:174] setting up certificates
	I0612 21:38:22.690504   80762 provision.go:84] configureAuth start
	I0612 21:38:22.690514   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetMachineName
	I0612 21:38:22.690774   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:22.693186   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.693528   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.693576   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.693689   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.695948   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.696285   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.696318   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.696432   80762 provision.go:143] copyHostCerts
	I0612 21:38:22.696501   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:22.696521   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:22.696583   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:22.696662   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:22.696671   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:22.696693   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:22.696774   80762 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:22.696784   80762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:22.696803   80762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:22.696847   80762 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-983302 san=[127.0.0.1 192.168.50.81 localhost minikube old-k8s-version-983302]
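The provision.go line above issues a machine server certificate signed by the minikube CA with the SANs listed (127.0.0.1, 192.168.50.81, localhost, minikube, old-k8s-version-983302). A condensed crypto/x509 sketch of issuing such a SAN-bearing certificate follows; the throwaway CA, key sizes, and validity periods are assumptions made only so the example is self-contained, and error handling is elided for brevity.

```go
// servercert.go - illustrative sketch: issue a server certificate signed by a
// CA, carrying IP and DNS SANs like those reported in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Assumption: a real run would load ca.pem / ca-key.pem from disk; a
	// throwaway CA is generated here to keep the sketch self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-983302"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as reported in the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.81")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-983302"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```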
	I0612 21:38:23.576378   80157 start.go:364] duration metric: took 53.730674695s to acquireMachinesLock for "no-preload-087875"
	I0612 21:38:23.576429   80157 start.go:96] Skipping create...Using existing machine configuration
	I0612 21:38:23.576436   80157 fix.go:54] fixHost starting: 
	I0612 21:38:23.576844   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:38:23.576875   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:38:23.594879   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40925
	I0612 21:38:23.595284   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:38:23.595811   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:38:23.595836   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:38:23.596201   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:38:23.596404   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:23.596559   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:38:23.598372   80157 fix.go:112] recreateIfNeeded on no-preload-087875: state=Stopped err=<nil>
	I0612 21:38:23.598399   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	W0612 21:38:23.598558   80157 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 21:38:23.600649   80157 out.go:177] * Restarting existing kvm2 VM for "no-preload-087875" ...
	I0612 21:38:21.147354   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:23.147393   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:22.863618   80762 provision.go:177] copyRemoteCerts
	I0612 21:38:22.863672   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:22.863698   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:22.866979   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.867371   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:22.867403   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:22.867548   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:22.867734   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:22.867904   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:22.868126   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:22.958350   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 21:38:22.984409   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:23.009623   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0612 21:38:23.038026   80762 provision.go:87] duration metric: took 347.510898ms to configureAuth
	I0612 21:38:23.038063   80762 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:23.038309   80762 config.go:182] Loaded profile config "old-k8s-version-983302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:38:23.038390   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.041196   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.041634   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.041660   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.041842   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.042044   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.042222   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.042410   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.042580   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:23.042780   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:23.042799   80762 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:23.324862   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:23.324893   80762 machine.go:97] duration metric: took 1.01124225s to provisionDockerMachine
	I0612 21:38:23.324904   80762 start.go:293] postStartSetup for "old-k8s-version-983302" (driver="kvm2")
	I0612 21:38:23.324913   80762 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:23.324928   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.325240   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:23.325274   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.328007   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.328343   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.328372   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.328578   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.328770   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.328939   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.329068   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.416040   80762 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:23.420586   80762 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:23.420607   80762 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:23.420674   80762 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:23.420739   80762 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:23.420823   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:23.432266   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:23.460619   80762 start.go:296] duration metric: took 135.703593ms for postStartSetup
	I0612 21:38:23.460661   80762 fix.go:56] duration metric: took 18.536593686s for fixHost
	I0612 21:38:23.460684   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.463415   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.463745   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.463780   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.463909   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.464110   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.464248   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.464378   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.464533   80762 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:23.464742   80762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0612 21:38:23.464754   80762 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:23.576211   80762 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228303.539451044
	
	I0612 21:38:23.576231   80762 fix.go:216] guest clock: 1718228303.539451044
	I0612 21:38:23.576239   80762 fix.go:229] Guest: 2024-06-12 21:38:23.539451044 +0000 UTC Remote: 2024-06-12 21:38:23.460665921 +0000 UTC m=+270.637213069 (delta=78.785123ms)
	I0612 21:38:23.576285   80762 fix.go:200] guest clock delta is within tolerance: 78.785123ms
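	(Editor's aside, not part of the log: the fix.go lines above compare the guest VM clock with the host clock and only adjust when the absolute delta exceeds a tolerance. A minimal Go sketch of that comparison, using the delta from the log; the one-second tolerance is an illustrative assumption, not necessarily minikube's value.)

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance mirrors the guest-clock check in the log: compare the
// guest clock with the host clock and report whether the absolute delta is small
// enough to skip resetting the guest clock.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1718228303539451044)         // guest clock value from the log
	host := guest.Add(-78785123 * time.Nanosecond)      // host was ~78.785123ms behind
	if clockDeltaWithinTolerance(guest, host, time.Second) {
		fmt.Println("guest clock delta is within tolerance")
	} else {
		fmt.Println("adjusting guest clock")
	}
}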
	I0612 21:38:23.576291   80762 start.go:83] releasing machines lock for "old-k8s-version-983302", held for 18.65227368s
	I0612 21:38:23.576316   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.576617   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:23.579493   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.579881   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.579913   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.580120   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580693   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580865   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .DriverName
	I0612 21:38:23.580952   80762 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:23.581005   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.581111   80762 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:23.581141   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHHostname
	I0612 21:38:23.584053   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584262   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584443   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.584479   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584587   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.584690   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:23.584728   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:23.584757   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.584855   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHPort
	I0612 21:38:23.584918   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.584980   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHKeyPath
	I0612 21:38:23.585067   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.585115   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetSSHUsername
	I0612 21:38:23.585227   80762 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/old-k8s-version-983302/id_rsa Username:docker}
	I0612 21:38:23.666055   80762 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:23.688409   80762 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:23.848030   80762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:23.855302   80762 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:23.855383   80762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:23.874362   80762 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:23.874389   80762 start.go:494] detecting cgroup driver to use...
	I0612 21:38:23.874461   80762 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:23.893239   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:23.909774   80762 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:23.909844   80762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:23.926084   80762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:23.943341   80762 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:24.072731   80762 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:24.244551   80762 docker.go:233] disabling docker service ...
	I0612 21:38:24.244624   80762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:24.261862   80762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:24.277051   80762 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:24.426146   80762 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:24.560634   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:24.575339   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:24.595965   80762 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0612 21:38:24.596043   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.607814   80762 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:24.607892   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.619001   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.630982   80762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:24.644326   80762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:24.658640   80762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:24.673944   80762 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:24.673994   80762 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:24.693853   80762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:24.709251   80762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:24.856222   80762 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0612 21:38:25.023760   80762 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:25.023842   80762 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:25.029449   80762 start.go:562] Will wait 60s for crictl version
	I0612 21:38:25.029522   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:25.033750   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:25.080911   80762 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:25.081018   80762 ssh_runner.go:195] Run: crio --version
	I0612 21:38:25.111727   80762 ssh_runner.go:195] Run: crio --version
	I0612 21:38:25.145999   80762 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0612 21:38:22.512748   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:24.515486   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:23.602119   80157 main.go:141] libmachine: (no-preload-087875) Calling .Start
	I0612 21:38:23.602319   80157 main.go:141] libmachine: (no-preload-087875) Ensuring networks are active...
	I0612 21:38:23.603167   80157 main.go:141] libmachine: (no-preload-087875) Ensuring network default is active
	I0612 21:38:23.603533   80157 main.go:141] libmachine: (no-preload-087875) Ensuring network mk-no-preload-087875 is active
	I0612 21:38:23.603887   80157 main.go:141] libmachine: (no-preload-087875) Getting domain xml...
	I0612 21:38:23.604617   80157 main.go:141] libmachine: (no-preload-087875) Creating domain...
	I0612 21:38:24.978550   80157 main.go:141] libmachine: (no-preload-087875) Waiting to get IP...
	I0612 21:38:24.979551   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:24.979945   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:24.980007   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:24.979925   81986 retry.go:31] will retry after 224.557195ms: waiting for machine to come up
	I0612 21:38:25.206441   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.206928   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.206957   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.206875   81986 retry.go:31] will retry after 361.682908ms: waiting for machine to come up
	I0612 21:38:25.570564   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.571139   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.571184   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.571089   81986 retry.go:31] will retry after 328.335873ms: waiting for machine to come up
	I0612 21:38:25.901471   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:25.902020   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:25.902054   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:25.901953   81986 retry.go:31] will retry after 505.408325ms: waiting for machine to come up
	I0612 21:38:26.408636   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:26.409139   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:26.409167   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:26.409091   81986 retry.go:31] will retry after 749.519426ms: waiting for machine to come up
	I0612 21:38:27.160100   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:27.160563   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:27.160611   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:27.160537   81986 retry.go:31] will retry after 641.037463ms: waiting for machine to come up
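	(Editor's aside, not part of the log: the repeated "will retry after …: waiting for machine to come up" lines follow a retry-with-growing-backoff pattern while the VM waits for a DHCP lease. A minimal, self-contained Go sketch of that pattern; lookupIP and the jittered doubling backoff are illustrative assumptions, not minikube's actual retry.go implementation.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP stands in for querying the libvirt DHCP leases; assumed for illustration.
func lookupIP() (string, error) {
	return "", errNoIP // pretend the lease has not appeared yet
}

// waitForIP polls until an IP is found or the deadline passes, sleeping for a
// randomized, growing interval similar to the intervals printed in the log.
func waitForIP(deadline time.Duration) (string, error) {
	start := time.Now()
	backoff := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff *= 2
	}
	return "", fmt.Errorf("timed out after %v waiting for machine to come up", deadline)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}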
	I0612 21:38:25.147420   80762 main.go:141] libmachine: (old-k8s-version-983302) Calling .GetIP
	I0612 21:38:25.151029   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:25.151402   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c8:d2", ip: ""} in network mk-old-k8s-version-983302: {Iface:virbr2 ExpiryTime:2024-06-12 22:38:16 +0000 UTC Type:0 Mac:52:54:00:7b:c8:d2 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:old-k8s-version-983302 Clientid:01:52:54:00:7b:c8:d2}
	I0612 21:38:25.151432   80762 main.go:141] libmachine: (old-k8s-version-983302) DBG | domain old-k8s-version-983302 has defined IP address 192.168.50.81 and MAC address 52:54:00:7b:c8:d2 in network mk-old-k8s-version-983302
	I0612 21:38:25.151726   80762 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:25.156561   80762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:25.171243   80762 kubeadm.go:877] updating cluster {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:25.171386   80762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 21:38:25.171429   80762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:25.225872   80762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:38:25.225936   80762 ssh_runner.go:195] Run: which lz4
	I0612 21:38:25.230447   80762 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0612 21:38:25.235452   80762 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 21:38:25.235485   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0612 21:38:27.033962   80762 crio.go:462] duration metric: took 1.803565745s to copy over tarball
	I0612 21:38:27.034045   80762 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 21:38:25.149629   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:27.651785   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:26.516743   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:29.013751   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:27.803722   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:27.804278   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:27.804316   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:27.804252   81986 retry.go:31] will retry after 1.184505978s: waiting for machine to come up
	I0612 21:38:28.990221   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:28.990736   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:28.990763   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:28.990709   81986 retry.go:31] will retry after 1.061139219s: waiting for machine to come up
	I0612 21:38:30.054187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:30.054768   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:30.054805   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:30.054718   81986 retry.go:31] will retry after 1.621121981s: waiting for machine to come up
	I0612 21:38:31.677355   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:31.677938   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:31.677966   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:31.677890   81986 retry.go:31] will retry after 2.17746309s: waiting for machine to come up
	I0612 21:38:30.212028   80762 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.177947965s)
	I0612 21:38:30.212073   80762 crio.go:469] duration metric: took 3.178080815s to extract the tarball
	I0612 21:38:30.212085   80762 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 21:38:30.256957   80762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:30.297891   80762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0612 21:38:30.297917   80762 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0612 21:38:30.298025   80762 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.298045   80762 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.298055   80762 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.298021   80762 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0612 21:38:30.298106   80762 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.298062   80762 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.298004   80762 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:30.298079   80762 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.299755   80762 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0612 21:38:30.299842   80762 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.299848   80762 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.299843   80762 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:30.299866   80762 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.299876   80762 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.299905   80762 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.299755   80762 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.466739   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0612 21:38:30.516078   80762 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0612 21:38:30.516127   80762 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0612 21:38:30.516174   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.520362   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0612 21:38:30.545437   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.563320   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0612 21:38:30.599110   80762 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0612 21:38:30.599155   80762 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.599217   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.603578   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0612 21:38:30.639450   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0612 21:38:30.649462   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.650602   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.652555   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.656970   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.672136   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.766185   80762 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0612 21:38:30.766233   80762 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.766279   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.778901   80762 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0612 21:38:30.778946   80762 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.778952   80762 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0612 21:38:30.778983   80762 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.778994   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.779041   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.793610   80762 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0612 21:38:30.793650   80762 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.793698   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.807451   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0612 21:38:30.807482   80762 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0612 21:38:30.807518   80762 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.807458   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0612 21:38:30.807518   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0612 21:38:30.807557   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0612 21:38:30.807559   80762 ssh_runner.go:195] Run: which crictl
	I0612 21:38:30.916470   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0612 21:38:30.916564   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0612 21:38:30.916576   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0612 21:38:30.916603   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0612 21:38:30.916646   80762 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0612 21:38:30.953152   80762 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0612 21:38:31.194046   80762 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:31.341827   80762 cache_images.go:92] duration metric: took 1.043891497s to LoadCachedImages
	W0612 21:38:31.341922   80762 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0612 21:38:31.341937   80762 kubeadm.go:928] updating node { 192.168.50.81 8443 v1.20.0 crio true true} ...
	I0612 21:38:31.342064   80762 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-983302 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:38:31.342154   80762 ssh_runner.go:195] Run: crio config
	I0612 21:38:31.395673   80762 cni.go:84] Creating CNI manager for ""
	I0612 21:38:31.395706   80762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:38:31.395722   80762 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:38:31.395744   80762 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-983302 NodeName:old-k8s-version-983302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0612 21:38:31.395918   80762 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-983302"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:38:31.395995   80762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0612 21:38:31.410706   80762 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:38:31.410785   80762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:38:31.425161   80762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0612 21:38:31.445883   80762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:38:31.463605   80762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0612 21:38:31.482797   80762 ssh_runner.go:195] Run: grep 192.168.50.81	control-plane.minikube.internal$ /etc/hosts
	I0612 21:38:31.486974   80762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:31.499681   80762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:31.645490   80762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:38:31.668769   80762 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302 for IP: 192.168.50.81
	I0612 21:38:31.668797   80762 certs.go:194] generating shared ca certs ...
	I0612 21:38:31.668820   80762 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:31.668987   80762 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:38:31.669061   80762 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:38:31.669088   80762 certs.go:256] generating profile certs ...
	I0612 21:38:31.669212   80762 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/client.key
	I0612 21:38:31.669309   80762 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key.1098c83c
	I0612 21:38:31.669373   80762 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key
	I0612 21:38:31.669548   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:38:31.669598   80762 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:38:31.669613   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:38:31.669662   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:38:31.669723   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:38:31.669759   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:38:31.669830   80762 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:31.670835   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:38:31.717330   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:38:31.754900   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:38:31.798099   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:38:31.839647   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0612 21:38:31.883454   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 21:38:31.920765   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:38:31.953069   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/old-k8s-version-983302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0612 21:38:31.978134   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:38:32.002475   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:38:32.027784   80762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:38:32.053563   80762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:38:32.074493   80762 ssh_runner.go:195] Run: openssl version
	I0612 21:38:32.080620   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:38:32.093531   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.098615   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.098688   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:38:32.104777   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:38:32.116551   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:38:32.130188   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.135197   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.135279   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:38:32.142777   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:38:32.156051   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:38:32.169866   80762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.175249   80762 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.175340   80762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:38:32.181561   80762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
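	(Editor's aside, not part of the log: the openssl/ln sequence above computes each certificate's OpenSSL subject hash and exposes it under /etc/ssl/certs as "<hash>.0", which is how OpenSSL locates trusted CAs. A minimal Go sketch of the same sequence; the paths in main are illustrative, mirroring the minikubeCA.pem link in the log.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the subject hash of a certificate with
// `openssl x509 -hash -noout -in <cert>` and symlinks it into certDir
// as "<hash>.0", replacing any stale link, like `ln -fs` in the log.
func linkCertByHash(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // ignore error if the link does not exist yet
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths; the log links minikubeCA.pem to /etc/ssl/certs/b5213941.0 this way.
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}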
	I0612 21:38:32.193430   80762 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:38:32.198235   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:38:32.204654   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:38:32.210771   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:38:32.216966   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:38:32.223203   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:38:32.230990   80762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 21:38:32.237290   80762 kubeadm.go:391] StartCluster: {Name:old-k8s-version-983302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-983302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:38:32.237446   80762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:38:32.237503   80762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:32.282436   80762 cri.go:89] found id: ""
	I0612 21:38:32.282516   80762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:38:32.295283   80762 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:38:32.295313   80762 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:38:32.295321   80762 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:38:32.295400   80762 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:38:32.307483   80762 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:38:32.308555   80762 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-983302" does not appear in /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:38:32.309335   80762 kubeconfig.go:62] /home/jenkins/minikube-integration/17779-14199/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-983302" cluster setting kubeconfig missing "old-k8s-version-983302" context setting]
	I0612 21:38:32.310486   80762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:38:32.397524   80762 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:38:32.411765   80762 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.81
	I0612 21:38:32.411797   80762 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:38:32.411807   80762 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:38:32.411849   80762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:38:32.460009   80762 cri.go:89] found id: ""
	I0612 21:38:32.460078   80762 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:38:32.481670   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:38:32.493664   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:38:32.493684   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:38:32.493734   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:38:32.503974   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:38:32.504044   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:38:32.515971   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:38:32.525772   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:38:32.525832   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:38:32.537137   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:38:32.548539   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:38:32.548600   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:38:32.560401   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:38:32.570608   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:38:32.570681   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:38:32.582763   80762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:38:32.594407   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:32.734633   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:30.151681   80404 pod_ready.go:102] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:31.658859   80404 pod_ready.go:92] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:38:31.658881   80404 pod_ready.go:81] duration metric: took 12.518130926s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:31.658890   80404 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" ...
	I0612 21:38:33.666360   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:31.357093   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:33.513222   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:33.857141   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:33.857675   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:33.857702   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:33.857648   81986 retry.go:31] will retry after 2.485654549s: waiting for machine to come up
	I0612 21:38:36.344611   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:36.345117   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:36.345148   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:36.345075   81986 retry.go:31] will retry after 3.560063035s: waiting for machine to come up
	I0612 21:38:33.526337   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.768139   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:38:33.896716   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
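The 80762 run above rebuilds the old v1.20.0 control plane one kubeadm phase at a time rather than with a single "kubeadm init". As an illustrative shell sketch only (the config path and binaries PATH are taken from the log lines above; this is not part of the test output), the sequence amounts to:

    cfg=/var/tmp/minikube/kubeadm.yaml
    p="/var/lib/minikube/binaries/v1.20.0:$PATH"
    sudo env PATH="$p" kubeadm init phase certs all          --config "$cfg"   # regenerate cluster certificates
    sudo env PATH="$p" kubeadm init phase kubeconfig all     --config "$cfg"   # rewrite admin/kubelet/controller-manager/scheduler kubeconfigs
    sudo env PATH="$p" kubeadm init phase kubelet-start      --config "$cfg"   # write kubelet config and (re)start the kubelet
    sudo env PATH="$p" kubeadm init phase control-plane all  --config "$cfg"   # static pod manifests for apiserver/controller-manager/scheduler
    sudo env PATH="$p" kubeadm init phase etcd local         --config "$cfg"   # static pod manifest for the local etcd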
	I0612 21:38:33.986708   80762 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:38:33.986832   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:34.487194   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:34.987580   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.486966   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:35.987793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:36.487534   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:36.987526   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:37.487035   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
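The repeated pgrep lines from process 80762 are the apiserver wait loop; the timestamps show it polling roughly every 500ms until a kube-apiserver process started by minikube appears. A minimal sketch of the equivalent loop (illustrative only; the timeout below is an assumption, not taken from this log):

    # poll for the apiserver process, giving up after ~2 minutes
    for i in $(seq 1 240); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && break
      sleep 0.5
    done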
	I0612 21:38:35.669161   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:38.166177   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:35.513787   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:38.011903   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:39.907588   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:39.908051   80157 main.go:141] libmachine: (no-preload-087875) DBG | unable to find current IP address of domain no-preload-087875 in network mk-no-preload-087875
	I0612 21:38:39.908110   80157 main.go:141] libmachine: (no-preload-087875) DBG | I0612 21:38:39.907994   81986 retry.go:31] will retry after 4.524521166s: waiting for machine to come up
	I0612 21:38:37.986904   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:38.487262   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:38.986907   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:39.486895   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:39.987060   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.487385   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.987049   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:41.487325   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:41.987550   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:42.487225   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:40.665078   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:42.665731   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:44.666653   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:40.512741   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:42.513175   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:45.013451   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:44.434330   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.434850   80157 main.go:141] libmachine: (no-preload-087875) Found IP for machine: 192.168.72.63
	I0612 21:38:44.434883   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has current primary IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.434893   80157 main.go:141] libmachine: (no-preload-087875) Reserving static IP address...
	I0612 21:38:44.435324   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "no-preload-087875", mac: "52:54:00:6b:a2:aa", ip: "192.168.72.63"} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.435358   80157 main.go:141] libmachine: (no-preload-087875) Reserved static IP address: 192.168.72.63
	I0612 21:38:44.435378   80157 main.go:141] libmachine: (no-preload-087875) DBG | skip adding static IP to network mk-no-preload-087875 - found existing host DHCP lease matching {name: "no-preload-087875", mac: "52:54:00:6b:a2:aa", ip: "192.168.72.63"}
	I0612 21:38:44.435388   80157 main.go:141] libmachine: (no-preload-087875) Waiting for SSH to be available...
	I0612 21:38:44.435397   80157 main.go:141] libmachine: (no-preload-087875) DBG | Getting to WaitForSSH function...
	I0612 21:38:44.437881   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.438196   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.438218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.438385   80157 main.go:141] libmachine: (no-preload-087875) DBG | Using SSH client type: external
	I0612 21:38:44.438414   80157 main.go:141] libmachine: (no-preload-087875) DBG | Using SSH private key: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa (-rw-------)
	I0612 21:38:44.438452   80157 main.go:141] libmachine: (no-preload-087875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0612 21:38:44.438469   80157 main.go:141] libmachine: (no-preload-087875) DBG | About to run SSH command:
	I0612 21:38:44.438489   80157 main.go:141] libmachine: (no-preload-087875) DBG | exit 0
	I0612 21:38:44.571149   80157 main.go:141] libmachine: (no-preload-087875) DBG | SSH cmd err, output: <nil>: 
	I0612 21:38:44.571499   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetConfigRaw
	I0612 21:38:44.572172   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:44.574754   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.575142   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.575187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.575406   80157 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/config.json ...
	I0612 21:38:44.575580   80157 machine.go:94] provisionDockerMachine start ...
	I0612 21:38:44.575595   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:44.575825   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.578584   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.579008   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.579030   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.579214   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.579394   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.579534   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.579684   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.579924   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.580096   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.580109   80157 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 21:38:44.691573   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 21:38:44.691609   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.691890   80157 buildroot.go:166] provisioning hostname "no-preload-087875"
	I0612 21:38:44.691914   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.692120   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.695218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.695697   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.695729   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.695783   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.695986   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.696200   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.696383   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.696572   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.696776   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.696794   80157 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-087875 && echo "no-preload-087875" | sudo tee /etc/hostname
	I0612 21:38:44.821857   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-087875
	
	I0612 21:38:44.821893   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.824821   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.825263   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.825295   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.825523   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:44.825740   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.825912   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:44.826024   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:44.826187   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:44.826406   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:44.826430   80157 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-087875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-087875/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-087875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 21:38:44.948871   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 21:38:44.948904   80157 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17779-14199/.minikube CaCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17779-14199/.minikube}
	I0612 21:38:44.948930   80157 buildroot.go:174] setting up certificates
	I0612 21:38:44.948941   80157 provision.go:84] configureAuth start
	I0612 21:38:44.948954   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetMachineName
	I0612 21:38:44.949247   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:44.952166   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.952511   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.952538   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.952662   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:44.955149   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.955483   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:44.955505   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:44.955658   80157 provision.go:143] copyHostCerts
	I0612 21:38:44.955731   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem, removing ...
	I0612 21:38:44.955743   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem
	I0612 21:38:44.955807   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/ca.pem (1078 bytes)
	I0612 21:38:44.955929   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem, removing ...
	I0612 21:38:44.955942   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem
	I0612 21:38:44.955975   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/cert.pem (1123 bytes)
	I0612 21:38:44.956052   80157 exec_runner.go:144] found /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem, removing ...
	I0612 21:38:44.956059   80157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem
	I0612 21:38:44.956078   80157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17779-14199/.minikube/key.pem (1679 bytes)
	I0612 21:38:44.956125   80157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem org=jenkins.no-preload-087875 san=[127.0.0.1 192.168.72.63 localhost minikube no-preload-087875]
	I0612 21:38:45.138701   80157 provision.go:177] copyRemoteCerts
	I0612 21:38:45.138758   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 21:38:45.138781   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.141540   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.142011   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.142055   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.142199   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.142457   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.142603   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.142765   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.234480   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 21:38:45.259043   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0612 21:38:45.290511   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 21:38:45.316377   80157 provision.go:87] duration metric: took 367.423709ms to configureAuth
	I0612 21:38:45.316403   80157 buildroot.go:189] setting minikube options for container-runtime
	I0612 21:38:45.316607   80157 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:38:45.316684   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.319596   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.320160   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.320187   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.320384   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.320598   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.320778   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.320973   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.321203   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:45.321368   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:45.321387   80157 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0612 21:38:45.611478   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0612 21:38:45.611511   80157 machine.go:97] duration metric: took 1.035919707s to provisionDockerMachine
	I0612 21:38:45.611523   80157 start.go:293] postStartSetup for "no-preload-087875" (driver="kvm2")
	I0612 21:38:45.611533   80157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 21:38:45.611556   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.611843   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 21:38:45.611862   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.615071   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.615542   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.615582   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.615715   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.615889   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.616028   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.616204   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.707710   80157 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 21:38:45.712155   80157 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 21:38:45.712177   80157 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/addons for local assets ...
	I0612 21:38:45.712235   80157 filesync.go:126] Scanning /home/jenkins/minikube-integration/17779-14199/.minikube/files for local assets ...
	I0612 21:38:45.712301   80157 filesync.go:149] local asset: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem -> 214442.pem in /etc/ssl/certs
	I0612 21:38:45.712386   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 21:38:45.722654   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:38:45.747626   80157 start.go:296] duration metric: took 136.091584ms for postStartSetup
	I0612 21:38:45.747666   80157 fix.go:56] duration metric: took 22.171227252s for fixHost
	I0612 21:38:45.747685   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.750588   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.750972   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.750999   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.751231   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.751443   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.751598   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.751773   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.752005   80157 main.go:141] libmachine: Using SSH client type: native
	I0612 21:38:45.752181   80157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.63 22 <nil> <nil>}
	I0612 21:38:45.752195   80157 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 21:38:45.864042   80157 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228325.837473906
	
	I0612 21:38:45.864068   80157 fix.go:216] guest clock: 1718228325.837473906
	I0612 21:38:45.864079   80157 fix.go:229] Guest: 2024-06-12 21:38:45.837473906 +0000 UTC Remote: 2024-06-12 21:38:45.747669277 +0000 UTC m=+358.493088442 (delta=89.804629ms)
	I0612 21:38:45.864106   80157 fix.go:200] guest clock delta is within tolerance: 89.804629ms
	I0612 21:38:45.864114   80157 start.go:83] releasing machines lock for "no-preload-087875", held for 22.287706082s
	I0612 21:38:45.864152   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.864448   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:45.867230   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.867603   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.867633   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.867768   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868293   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868453   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:38:45.868535   80157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 21:38:45.868575   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.868663   80157 ssh_runner.go:195] Run: cat /version.json
	I0612 21:38:45.868681   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:38:45.871218   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871489   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871678   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.871719   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.871915   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.872061   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:45.872085   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.872109   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:45.872240   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:38:45.872246   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.872522   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:38:45.872529   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.872692   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:38:45.872868   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:38:45.953249   80157 ssh_runner.go:195] Run: systemctl --version
	I0612 21:38:45.976778   80157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0612 21:38:46.124511   80157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 21:38:46.130509   80157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 21:38:46.130575   80157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 21:38:46.149670   80157 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 21:38:46.149691   80157 start.go:494] detecting cgroup driver to use...
	I0612 21:38:46.149755   80157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 21:38:46.167865   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 21:38:46.182896   80157 docker.go:217] disabling cri-docker service (if available) ...
	I0612 21:38:46.182951   80157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0612 21:38:46.197058   80157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0612 21:38:46.211517   80157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0612 21:38:46.331986   80157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0612 21:38:46.500675   80157 docker.go:233] disabling docker service ...
	I0612 21:38:46.500745   80157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0612 21:38:46.516858   80157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0612 21:38:46.530617   80157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0612 21:38:46.674917   80157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0612 21:38:46.810090   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0612 21:38:46.825079   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 21:38:46.843895   80157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0612 21:38:46.843963   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.854170   80157 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0612 21:38:46.854245   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.864699   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.875057   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.886063   80157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 21:38:46.897688   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.908984   80157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0612 21:38:46.926803   80157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
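Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in settings. This is a reconstruction from the commands in this log, not a dump of the actual file on the node, which may carry additional keys; the section names follow CRI-O's usual config layout:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]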
	I0612 21:38:46.939373   80157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 21:38:46.948868   80157 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0612 21:38:46.948922   80157 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0612 21:38:46.963593   80157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 21:38:46.973735   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:38:47.108669   80157 ssh_runner.go:195] Run: sudo systemctl restart crio
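Before that restart the run makes sure bridged pod traffic will be seen by iptables: the sysctl probe fails only because the br_netfilter module is not loaded yet, so minikube loads the module and enables IP forwarding. A condensed sketch of that prerequisite step (the commands mirror the log lines above; the comments are added for illustration):

    sudo modprobe br_netfilter                            # creates /proc/sys/net/bridge/bridge-nf-call-iptables
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # let the node forward pod traffic
    sudo systemctl daemon-reload && sudo systemctl restart crio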
	I0612 21:38:47.249938   80157 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0612 21:38:47.250044   80157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0612 21:38:47.255480   80157 start.go:562] Will wait 60s for crictl version
	I0612 21:38:47.255556   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.259730   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 21:38:47.303074   80157 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0612 21:38:47.303187   80157 ssh_runner.go:195] Run: crio --version
	I0612 21:38:47.332225   80157 ssh_runner.go:195] Run: crio --version
	I0612 21:38:47.363628   80157 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0612 21:38:42.987579   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:43.487465   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:43.987265   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:44.487935   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:44.987399   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:45.487793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:45.986898   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:46.486985   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:46.986848   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:47.486947   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:47.164573   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:49.165711   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:47.512195   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:49.512366   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:47.365068   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetIP
	I0612 21:38:47.367703   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:47.368079   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:38:47.368103   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:38:47.368325   80157 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0612 21:38:47.372608   80157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:38:47.386411   80157 kubeadm.go:877] updating cluster {Name:no-preload-087875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 21:38:47.386750   80157 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 21:38:47.386796   80157 ssh_runner.go:195] Run: sudo crictl images --output json
	I0612 21:38:47.422165   80157 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0612 21:38:47.422189   80157 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0612 21:38:47.422227   80157 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:47.422280   80157 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.422355   80157 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.422370   80157 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.422311   80157 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.422347   80157 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.422318   80157 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0612 21:38:47.422599   80157 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.423599   80157 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0612 21:38:47.423610   80157 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.423612   80157 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.423630   80157 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:47.423626   80157 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.423699   80157 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.423737   80157 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.423720   80157 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.556807   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0612 21:38:47.557424   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.561887   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.569402   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.571880   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.576879   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.587848   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.759890   80157 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0612 21:38:47.759926   80157 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.759947   80157 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0612 21:38:47.759973   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.759976   80157 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0612 21:38:47.760006   80157 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.760015   80157 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0612 21:38:47.759977   80157 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.760061   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760063   80157 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.760075   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760073   80157 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0612 21:38:47.760091   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.760101   80157 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.760164   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.766878   80157 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0612 21:38:47.766905   80157 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.766943   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.777168   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 21:38:47.777197   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0612 21:38:47.778414   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0612 21:38:47.778459   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0612 21:38:47.778414   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0612 21:38:47.779057   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0612 21:38:47.882668   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0612 21:38:47.882770   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.902416   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0612 21:38:47.902532   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:47.917388   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0612 21:38:47.917417   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0612 21:38:47.917417   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0612 21:38:47.917473   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0612 21:38:47.917501   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:38:47.917528   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:47.917545   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:38:47.917500   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0612 21:38:47.917558   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.917594   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0612 21:38:47.917502   80157 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:47.917559   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0612 21:38:47.929251   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0612 21:38:47.929299   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0612 21:38:47.929308   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0612 21:38:48.312589   80157 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:50.713720   80157 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1: (2.796151375s)
	I0612 21:38:50.713767   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0612 21:38:50.713877   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.796263274s)
	I0612 21:38:50.713901   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0612 21:38:50.713877   80157 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.401254109s)
	I0612 21:38:50.713921   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:50.713966   80157 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0612 21:38:50.713987   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0612 21:38:50.714017   80157 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:50.714063   80157 ssh_runner.go:195] Run: which crictl
	I0612 21:38:47.987863   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:48.487299   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:48.986886   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:49.486972   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:49.987859   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:50.487034   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:50.987724   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.486948   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.986873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:52.487668   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:51.665638   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:53.665855   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:51.512765   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:54.011870   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:53.169682   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.455668553s)
	I0612 21:38:53.169705   80157 ssh_runner.go:235] Completed: which crictl: (2.455619981s)
	I0612 21:38:53.169714   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0612 21:38:53.169741   80157 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:53.169759   80157 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:38:53.169784   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0612 21:38:53.216895   80157 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0612 21:38:53.217020   80157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:38:57.220343   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.050521066s)
	I0612 21:38:57.220376   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0612 21:38:57.220397   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:57.220444   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0612 21:38:57.220443   80157 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (4.003396955s)
	I0612 21:38:57.220487   80157 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0612 21:38:52.987635   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:53.487500   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:53.987860   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:54.487855   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:54.986868   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:55.487259   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:55.987902   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.487535   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.987269   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:57.487542   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:56.166299   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.665085   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:56.012847   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.557142   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:38:58.682288   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.46182102s)
	I0612 21:38:58.682313   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0612 21:38:58.682337   80157 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:38:58.682376   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0612 21:39:00.576373   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.893964365s)
	I0612 21:39:00.576412   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0612 21:39:00.576443   80157 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:39:00.576504   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0612 21:38:57.987222   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:58.486976   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:58.986913   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:59.487269   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:38:59.987289   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.487208   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.987690   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:01.487283   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:01.987541   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:02.487589   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:00.667732   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:03.165317   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:01.012684   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:03.015111   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:02.445930   80157 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.86940281s)
	I0612 21:39:02.445960   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0612 21:39:02.445994   80157 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:39:02.446071   80157 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0612 21:39:03.393330   80157 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17779-14199/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0612 21:39:03.393375   80157 cache_images.go:123] Successfully loaded all cached images
	I0612 21:39:03.393382   80157 cache_images.go:92] duration metric: took 15.9711807s to LoadCachedImages
	I0612 21:39:03.393397   80157 kubeadm.go:928] updating node { 192.168.72.63 8443 v1.30.1 crio true true} ...
	I0612 21:39:03.393543   80157 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-087875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 21:39:03.393658   80157 ssh_runner.go:195] Run: crio config
	I0612 21:39:03.448859   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:39:03.448884   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:39:03.448901   80157 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 21:39:03.448930   80157 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.63 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-087875 NodeName:no-preload-087875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 21:39:03.449103   80157 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-087875"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 21:39:03.449181   80157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 21:39:03.462756   80157 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 21:39:03.462825   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 21:39:03.472653   80157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0612 21:39:03.491567   80157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 21:39:03.509239   80157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0612 21:39:03.527802   80157 ssh_runner.go:195] Run: grep 192.168.72.63	control-plane.minikube.internal$ /etc/hosts
	I0612 21:39:03.531523   80157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 21:39:03.543748   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:39:03.666376   80157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:39:03.683563   80157 certs.go:68] Setting up /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875 for IP: 192.168.72.63
	I0612 21:39:03.683587   80157 certs.go:194] generating shared ca certs ...
	I0612 21:39:03.683606   80157 certs.go:226] acquiring lock for ca certs: {Name:mk0acb420384e68f188900634721a8b628172b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:39:03.683766   80157 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key
	I0612 21:39:03.683816   80157 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key
	I0612 21:39:03.683831   80157 certs.go:256] generating profile certs ...
	I0612 21:39:03.683927   80157 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/client.key
	I0612 21:39:03.684010   80157 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.key.13709275
	I0612 21:39:03.684066   80157 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.key
	I0612 21:39:03.684217   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem (1338 bytes)
	W0612 21:39:03.684259   80157 certs.go:480] ignoring /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444_empty.pem, impossibly tiny 0 bytes
	I0612 21:39:03.684272   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca-key.pem (1679 bytes)
	I0612 21:39:03.684318   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/ca.pem (1078 bytes)
	I0612 21:39:03.684364   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/cert.pem (1123 bytes)
	I0612 21:39:03.684395   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/certs/key.pem (1679 bytes)
	I0612 21:39:03.684455   80157 certs.go:484] found cert: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem (1708 bytes)
	I0612 21:39:03.685098   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 21:39:03.732817   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 21:39:03.771449   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 21:39:03.800774   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0612 21:39:03.831845   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 21:39:03.862000   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 21:39:03.901036   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 21:39:03.925025   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/no-preload-087875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 21:39:03.950862   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 21:39:03.974222   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/certs/21444.pem --> /usr/share/ca-certificates/21444.pem (1338 bytes)
	I0612 21:39:04.002698   80157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/ssl/certs/214442.pem --> /usr/share/ca-certificates/214442.pem (1708 bytes)
	I0612 21:39:04.028173   80157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 21:39:04.044685   80157 ssh_runner.go:195] Run: openssl version
	I0612 21:39:04.050600   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 21:39:04.061893   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.066371   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.066424   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 21:39:04.072463   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 21:39:04.083929   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21444.pem && ln -fs /usr/share/ca-certificates/21444.pem /etc/ssl/certs/21444.pem"
	I0612 21:39:04.094777   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.099380   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:24 /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.099435   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21444.pem
	I0612 21:39:04.105125   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21444.pem /etc/ssl/certs/51391683.0"
	I0612 21:39:04.116191   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/214442.pem && ln -fs /usr/share/ca-certificates/214442.pem /etc/ssl/certs/214442.pem"
	I0612 21:39:04.127408   80157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.132234   80157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:24 /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.132315   80157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/214442.pem
	I0612 21:39:04.138401   80157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/214442.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 21:39:04.149542   80157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 21:39:04.154133   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 21:39:04.160171   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 21:39:04.166410   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 21:39:04.172650   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 21:39:04.178506   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 21:39:04.184375   80157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 21:39:04.190412   80157 kubeadm.go:391] StartCluster: {Name:no-preload-087875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:no-preload-087875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 21:39:04.190524   80157 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0612 21:39:04.190584   80157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:39:04.235297   80157 cri.go:89] found id: ""
	I0612 21:39:04.235362   80157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0612 21:39:04.246400   80157 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 21:39:04.246429   80157 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 21:39:04.246449   80157 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 21:39:04.246499   80157 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 21:39:04.257137   80157 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:39:04.258277   80157 kubeconfig.go:125] found "no-preload-087875" server: "https://192.168.72.63:8443"
	I0612 21:39:04.260656   80157 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 21:39:04.270637   80157 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.63
	I0612 21:39:04.270666   80157 kubeadm.go:1154] stopping kube-system containers ...
	I0612 21:39:04.270675   80157 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0612 21:39:04.270730   80157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0612 21:39:04.316487   80157 cri.go:89] found id: ""
	I0612 21:39:04.316550   80157 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 21:39:04.334814   80157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:39:04.346430   80157 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:39:04.346451   80157 kubeadm.go:156] found existing configuration files:
	
	I0612 21:39:04.346500   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:39:04.356362   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:39:04.356417   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:39:04.366999   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:39:04.378005   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:39:04.378061   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:39:04.388052   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:39:04.397130   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:39:04.397185   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:39:04.407053   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:39:04.416338   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:39:04.416395   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:39:04.426475   80157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:39:04.436852   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:04.565452   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.461610   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.676493   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.767236   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:05.870855   80157 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:39:05.870960   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.372034   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.871680   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.906242   80157 api_server.go:72] duration metric: took 1.035387498s to wait for apiserver process to appear ...
	I0612 21:39:06.906273   80157 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:39:06.906296   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:06.906883   80157 api_server.go:269] stopped: https://192.168.72.63:8443/healthz: Get "https://192.168.72.63:8443/healthz": dial tcp 192.168.72.63:8443: connect: connection refused
	I0612 21:39:02.987853   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:03.487382   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:03.987303   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:04.487852   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:04.987464   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.486928   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.987660   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.487208   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:06.987822   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:07.487497   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:05.166502   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:07.665452   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:09.665766   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:05.512792   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:08.012392   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:10.014073   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:07.407227   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.589285   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 21:39:09.589319   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 21:39:09.589336   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.726716   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:09.726753   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:09.907032   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:09.917718   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:09.917746   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:10.406997   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:10.412127   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 21:39:10.412156   80157 api_server.go:103] status: https://192.168.72.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 21:39:10.906700   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:39:10.911262   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 200:
	ok
	I0612 21:39:10.918778   80157 api_server.go:141] control plane version: v1.30.1
	I0612 21:39:10.918813   80157 api_server.go:131] duration metric: took 4.012531107s to wait for apiserver health ...
	I0612 21:39:10.918824   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:39:10.918832   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:39:10.921012   80157 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:39:10.922401   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:39:10.948209   80157 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:39:10.974530   80157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:39:10.986054   80157 system_pods.go:59] 8 kube-system pods found
	I0612 21:39:10.986091   80157 system_pods.go:61] "coredns-7db6d8ff4d-sh68b" [17691219-bfda-443b-8049-e6e966aadb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 21:39:10.986102   80157 system_pods.go:61] "etcd-no-preload-087875" [3048b12a-4354-45fd-99c7-d2a84035e102] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 21:39:10.986114   80157 system_pods.go:61] "kube-apiserver-no-preload-087875" [0f39a5fd-1a64-479f-bb28-c19bc10b7ed3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 21:39:10.986127   80157 system_pods.go:61] "kube-controller-manager-no-preload-087875" [62cc49b8-b05f-4371-aa17-bea17d08d2f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 21:39:10.986141   80157 system_pods.go:61] "kube-proxy-htv9h" [e3eb4693-7896-4dd2-98b8-91f06b028a1e] Running
	I0612 21:39:10.986158   80157 system_pods.go:61] "kube-scheduler-no-preload-087875" [ef833b9d-75ca-43bd-b196-30594775b174] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 21:39:10.986170   80157 system_pods.go:61] "metrics-server-569cc877fc-d5mj6" [79ba2aad-c942-4162-b69a-5c7dd138a618] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:39:10.986178   80157 system_pods.go:61] "storage-provisioner" [5793c778-1a5c-4cfe-924a-b85b72df53cd] Running
	I0612 21:39:10.986187   80157 system_pods.go:74] duration metric: took 11.634011ms to wait for pod list to return data ...
	I0612 21:39:10.986199   80157 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:39:10.992801   80157 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:39:10.992843   80157 node_conditions.go:123] node cpu capacity is 2
	I0612 21:39:10.992856   80157 node_conditions.go:105] duration metric: took 6.648025ms to run NodePressure ...
	I0612 21:39:10.992878   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 21:39:11.263413   80157 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 21:39:11.271758   80157 kubeadm.go:733] kubelet initialised
	I0612 21:39:11.271781   80157 kubeadm.go:734] duration metric: took 8.347232ms waiting for restarted kubelet to initialise ...
	I0612 21:39:11.271789   80157 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:39:11.277940   80157 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:07.987732   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:08.486974   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:08.986873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:09.486941   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:09.986929   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:10.487754   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:10.987685   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:11.486910   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:11.987457   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:12.486873   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:12.165604   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:14.166986   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:12.029928   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:14.512085   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:13.287555   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:15.786345   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:12.987394   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:13.486915   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:13.987880   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:14.486881   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:14.986951   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:15.487462   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:15.986850   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.487213   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.987066   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:17.487882   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:16.666123   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:18.666354   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:16.512936   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:19.013463   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:18.285110   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:20.788396   80157 pod_ready.go:102] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:21.284869   80157 pod_ready.go:92] pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:21.284902   80157 pod_ready.go:81] duration metric: took 10.006929439s for pod "coredns-7db6d8ff4d-sh68b" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:21.284916   80157 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:17.987273   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:18.486996   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:18.987836   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:19.487622   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:19.987381   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:20.487005   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:20.987638   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.487670   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.987552   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:22.487438   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:21.166215   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:23.665272   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:21.512836   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:24.014108   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:23.291502   80157 pod_ready.go:102] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:25.791813   80157 pod_ready.go:92] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.791842   80157 pod_ready.go:81] duration metric: took 4.506916362s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.791854   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.796901   80157 pod_ready.go:92] pod "kube-apiserver-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.796928   80157 pod_ready.go:81] duration metric: took 5.066599ms for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.796939   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.801550   80157 pod_ready.go:92] pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.801571   80157 pod_ready.go:81] duration metric: took 4.624771ms for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.801580   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-htv9h" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.806178   80157 pod_ready.go:92] pod "kube-proxy-htv9h" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.806195   80157 pod_ready.go:81] duration metric: took 4.609956ms for pod "kube-proxy-htv9h" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.806204   80157 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.809883   80157 pod_ready.go:92] pod "kube-scheduler-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:39:25.809902   80157 pod_ready.go:81] duration metric: took 3.691999ms for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:25.809914   80157 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" ...
	I0612 21:39:22.987165   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:23.487122   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:23.987804   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:24.487583   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:24.987647   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.487126   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.987251   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:26.486996   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:26.987044   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:27.486911   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:25.668272   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:28.164809   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:26.513220   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:29.013047   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:27.817352   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:30.315600   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:27.987822   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:28.487496   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:28.987166   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:29.487892   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:29.987787   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.487315   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.987933   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:31.487255   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:31.987793   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:32.487881   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:30.165900   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.167795   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:34.665939   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:31.013473   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:33.015281   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.316680   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:34.317063   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:36.816905   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:32.987267   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:33.487678   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:33.987296   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:33.987371   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:34.028670   80762 cri.go:89] found id: ""
	I0612 21:39:34.028699   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.028710   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:34.028717   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:34.028778   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:34.068371   80762 cri.go:89] found id: ""
	I0612 21:39:34.068400   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.068412   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:34.068419   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:34.068485   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:34.104605   80762 cri.go:89] found id: ""
	I0612 21:39:34.104634   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.104643   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:34.104650   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:34.104745   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:34.150301   80762 cri.go:89] found id: ""
	I0612 21:39:34.150327   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.150335   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:34.150341   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:34.150396   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:34.191426   80762 cri.go:89] found id: ""
	I0612 21:39:34.191462   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.191475   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:34.191484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:34.191562   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:34.228483   80762 cri.go:89] found id: ""
	I0612 21:39:34.228523   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.228535   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:34.228543   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:34.228653   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:34.262834   80762 cri.go:89] found id: ""
	I0612 21:39:34.262863   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.262873   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:34.262881   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:34.262944   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:34.298283   80762 cri.go:89] found id: ""
	I0612 21:39:34.298312   80762 logs.go:276] 0 containers: []
	W0612 21:39:34.298321   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:34.298330   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:34.298340   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:34.350889   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:34.350918   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:34.365264   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:34.365289   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:34.508130   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:34.508162   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:34.508180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:34.572036   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:34.572076   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:37.114371   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:37.127410   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:37.127492   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:37.168684   80762 cri.go:89] found id: ""
	I0612 21:39:37.168705   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.168714   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:37.168723   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:37.168798   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:37.208765   80762 cri.go:89] found id: ""
	I0612 21:39:37.208797   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.208808   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:37.208815   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:37.208875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:37.266245   80762 cri.go:89] found id: ""
	I0612 21:39:37.266270   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.266277   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:37.266283   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:37.266331   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:37.313557   80762 cri.go:89] found id: ""
	I0612 21:39:37.313586   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.313597   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:37.313606   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:37.313677   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:37.353292   80762 cri.go:89] found id: ""
	I0612 21:39:37.353318   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.353325   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:37.353332   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:37.353389   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:37.391940   80762 cri.go:89] found id: ""
	I0612 21:39:37.391974   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.391984   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:37.392015   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:37.392078   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:37.432133   80762 cri.go:89] found id: ""
	I0612 21:39:37.432154   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.432166   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:37.432174   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:37.432228   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:37.468274   80762 cri.go:89] found id: ""
	I0612 21:39:37.468302   80762 logs.go:276] 0 containers: []
	W0612 21:39:37.468310   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:37.468328   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:37.468347   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:37.543904   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:37.543941   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:37.586957   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:37.586982   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:37.641247   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:37.641288   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:37.657076   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:37.657101   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:37.729279   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:37.165427   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:39.166383   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:35.512174   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:37.513222   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:40.012806   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:39.317119   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:41.817268   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:40.229638   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:40.243825   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:40.243889   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:40.282795   80762 cri.go:89] found id: ""
	I0612 21:39:40.282821   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.282829   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:40.282834   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:40.282879   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:40.320211   80762 cri.go:89] found id: ""
	I0612 21:39:40.320236   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.320246   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:40.320252   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:40.320338   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:40.356270   80762 cri.go:89] found id: ""
	I0612 21:39:40.356292   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.356300   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:40.356306   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:40.356353   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:40.394667   80762 cri.go:89] found id: ""
	I0612 21:39:40.394691   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.394699   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:40.394704   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:40.394751   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:40.432765   80762 cri.go:89] found id: ""
	I0612 21:39:40.432794   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.432804   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:40.432811   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:40.432883   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:40.472347   80762 cri.go:89] found id: ""
	I0612 21:39:40.472386   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.472406   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:40.472414   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:40.472477   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:40.508414   80762 cri.go:89] found id: ""
	I0612 21:39:40.508445   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.508456   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:40.508464   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:40.508521   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:40.546938   80762 cri.go:89] found id: ""
	I0612 21:39:40.546964   80762 logs.go:276] 0 containers: []
	W0612 21:39:40.546972   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:40.546981   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:40.546993   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:40.621356   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:40.621380   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:40.621398   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:40.703830   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:40.703865   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:40.744915   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:40.744965   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:40.798883   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:40.798920   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:41.167469   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:43.667403   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:42.512351   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:44.512639   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:44.317053   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:46.317350   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:43.315905   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:43.330150   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:43.330221   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:43.377307   80762 cri.go:89] found id: ""
	I0612 21:39:43.377337   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.377347   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:43.377362   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:43.377426   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:43.412608   80762 cri.go:89] found id: ""
	I0612 21:39:43.412638   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.412648   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:43.412654   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:43.412718   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:43.446716   80762 cri.go:89] found id: ""
	I0612 21:39:43.446746   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.446755   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:43.446762   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:43.446823   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:43.484607   80762 cri.go:89] found id: ""
	I0612 21:39:43.484636   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.484647   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:43.484655   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:43.484700   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:43.522400   80762 cri.go:89] found id: ""
	I0612 21:39:43.522427   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.522438   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:43.522445   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:43.522529   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:43.559121   80762 cri.go:89] found id: ""
	I0612 21:39:43.559147   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.559163   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:43.559211   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:43.559292   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:43.595886   80762 cri.go:89] found id: ""
	I0612 21:39:43.595919   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.595937   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:43.595945   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:43.596011   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:43.638549   80762 cri.go:89] found id: ""
	I0612 21:39:43.638573   80762 logs.go:276] 0 containers: []
	W0612 21:39:43.638583   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:43.638594   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:43.638609   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:43.705300   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:43.705338   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:43.723246   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:43.723281   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:43.807735   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:43.807760   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:43.807870   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:43.882971   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:43.883017   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:46.421476   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:46.434447   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:46.434532   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:46.470710   80762 cri.go:89] found id: ""
	I0612 21:39:46.470745   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.470758   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:46.470765   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:46.470828   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:46.504843   80762 cri.go:89] found id: ""
	I0612 21:39:46.504871   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.504878   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:46.504884   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:46.504941   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:46.542937   80762 cri.go:89] found id: ""
	I0612 21:39:46.542965   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.542973   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:46.542979   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:46.543035   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:46.581098   80762 cri.go:89] found id: ""
	I0612 21:39:46.581124   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.581133   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:46.581143   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:46.581189   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:46.617289   80762 cri.go:89] found id: ""
	I0612 21:39:46.617319   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.617329   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:46.617337   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:46.617402   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:46.651012   80762 cri.go:89] found id: ""
	I0612 21:39:46.651045   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.651057   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:46.651070   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:46.651141   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:46.688344   80762 cri.go:89] found id: ""
	I0612 21:39:46.688370   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.688379   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:46.688388   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:46.688451   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:46.724349   80762 cri.go:89] found id: ""
	I0612 21:39:46.724374   80762 logs.go:276] 0 containers: []
	W0612 21:39:46.724382   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:46.724390   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:46.724404   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:46.797866   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:46.797894   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:46.797912   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:46.887520   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:46.887557   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:46.928143   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:46.928182   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:46.981416   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:46.981451   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:46.164845   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:48.166925   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:46.513519   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:49.016041   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:48.816335   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:50.816407   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:49.497028   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:49.510077   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:49.510147   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:49.544313   80762 cri.go:89] found id: ""
	I0612 21:39:49.544349   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.544359   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:49.544365   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:49.544416   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:49.580220   80762 cri.go:89] found id: ""
	I0612 21:39:49.580248   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.580256   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:49.580262   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:49.580316   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:49.619582   80762 cri.go:89] found id: ""
	I0612 21:39:49.619607   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.619615   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:49.619620   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:49.619692   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:49.656453   80762 cri.go:89] found id: ""
	I0612 21:39:49.656479   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.656487   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:49.656493   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:49.656557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:49.694285   80762 cri.go:89] found id: ""
	I0612 21:39:49.694318   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.694330   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:49.694338   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:49.694417   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:49.731100   80762 cri.go:89] found id: ""
	I0612 21:39:49.731127   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.731135   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:49.731140   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:49.731209   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:49.767709   80762 cri.go:89] found id: ""
	I0612 21:39:49.767731   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.767738   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:49.767744   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:49.767787   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:49.801231   80762 cri.go:89] found id: ""
	I0612 21:39:49.801265   80762 logs.go:276] 0 containers: []
	W0612 21:39:49.801283   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:49.801294   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:49.801309   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:49.848500   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:49.848542   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:49.900084   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:49.900121   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:49.916208   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:49.916234   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:49.983283   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:49.983310   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:49.983325   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:52.566884   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:52.580400   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:52.580476   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:52.615922   80762 cri.go:89] found id: ""
	I0612 21:39:52.615957   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.615970   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:52.615978   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:52.616038   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:52.657316   80762 cri.go:89] found id: ""
	I0612 21:39:52.657348   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.657356   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:52.657362   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:52.657417   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:52.692426   80762 cri.go:89] found id: ""
	I0612 21:39:52.692459   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.692470   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:52.692478   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:52.692542   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:52.726800   80762 cri.go:89] found id: ""
	I0612 21:39:52.726835   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.726848   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:52.726856   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:52.726921   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:52.764283   80762 cri.go:89] found id: ""
	I0612 21:39:52.764314   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.764326   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:52.764341   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:52.764395   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:52.802279   80762 cri.go:89] found id: ""
	I0612 21:39:52.802311   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.802324   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:52.802331   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:52.802385   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:52.841433   80762 cri.go:89] found id: ""
	I0612 21:39:52.841466   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.841477   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:52.841484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:52.841546   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:50.667322   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:53.165294   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:51.016137   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:53.019373   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:52.818876   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:55.316845   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:52.881417   80762 cri.go:89] found id: ""
	I0612 21:39:52.881441   80762 logs.go:276] 0 containers: []
	W0612 21:39:52.881449   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:52.881457   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:52.881468   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:52.936228   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:52.936262   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:52.950688   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:52.950718   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:53.025101   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:53.025122   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:53.025138   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:53.114986   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:53.115031   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:55.653893   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:55.668983   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:55.669047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:55.708445   80762 cri.go:89] found id: ""
	I0612 21:39:55.708475   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.708486   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:55.708494   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:55.708558   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:55.745158   80762 cri.go:89] found id: ""
	I0612 21:39:55.745185   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.745195   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:55.745204   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:55.745270   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:55.785322   80762 cri.go:89] found id: ""
	I0612 21:39:55.785344   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.785363   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:55.785370   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:55.785442   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:55.822371   80762 cri.go:89] found id: ""
	I0612 21:39:55.822397   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.822408   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:55.822416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:55.822484   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:55.856866   80762 cri.go:89] found id: ""
	I0612 21:39:55.856888   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.856895   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:55.856900   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:55.856954   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:55.891618   80762 cri.go:89] found id: ""
	I0612 21:39:55.891648   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.891660   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:55.891668   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:55.891731   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:55.927483   80762 cri.go:89] found id: ""
	I0612 21:39:55.927504   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.927513   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:55.927519   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:55.927572   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:55.963546   80762 cri.go:89] found id: ""
	I0612 21:39:55.963572   80762 logs.go:276] 0 containers: []
	W0612 21:39:55.963584   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:55.963597   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:55.963616   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:56.037421   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:56.037442   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:56.037453   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:56.112148   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:56.112185   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:39:56.163359   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:56.163389   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:56.217109   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:56.217144   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:55.166499   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:57.665517   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:59.665625   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:55.513267   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:58.015558   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:57.317149   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:59.320306   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:01.815855   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:39:58.733278   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:39:58.746890   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:39:58.746951   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:39:58.785222   80762 cri.go:89] found id: ""
	I0612 21:39:58.785252   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.785263   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:39:58.785269   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:39:58.785343   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:39:58.824421   80762 cri.go:89] found id: ""
	I0612 21:39:58.824448   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.824455   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:39:58.824461   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:39:58.824521   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:39:58.863626   80762 cri.go:89] found id: ""
	I0612 21:39:58.863658   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.863669   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:39:58.863728   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:39:58.863818   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:39:58.904040   80762 cri.go:89] found id: ""
	I0612 21:39:58.904064   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.904073   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:39:58.904080   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:39:58.904147   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:39:58.937508   80762 cri.go:89] found id: ""
	I0612 21:39:58.937543   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.937557   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:39:58.937565   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:39:58.937632   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:39:58.974283   80762 cri.go:89] found id: ""
	I0612 21:39:58.974311   80762 logs.go:276] 0 containers: []
	W0612 21:39:58.974322   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:39:58.974330   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:39:58.974383   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:39:59.009954   80762 cri.go:89] found id: ""
	I0612 21:39:59.009987   80762 logs.go:276] 0 containers: []
	W0612 21:39:59.009999   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:39:59.010007   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:39:59.010072   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:39:59.051911   80762 cri.go:89] found id: ""
	I0612 21:39:59.051935   80762 logs.go:276] 0 containers: []
	W0612 21:39:59.051943   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:39:59.051951   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:39:59.051961   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:39:59.102911   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:39:59.102942   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:39:59.116576   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:39:59.116608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:39:59.189590   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:39:59.189619   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:39:59.189634   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:39:59.270192   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:39:59.270232   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:01.820872   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:01.834916   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:01.835000   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:01.870526   80762 cri.go:89] found id: ""
	I0612 21:40:01.870560   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.870572   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:01.870579   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:01.870642   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:01.909581   80762 cri.go:89] found id: ""
	I0612 21:40:01.909614   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.909626   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:01.909633   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:01.909727   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:01.947944   80762 cri.go:89] found id: ""
	I0612 21:40:01.947976   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.947988   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:01.947995   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:01.948059   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:01.985745   80762 cri.go:89] found id: ""
	I0612 21:40:01.985781   80762 logs.go:276] 0 containers: []
	W0612 21:40:01.985793   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:01.985800   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:01.985860   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:02.023716   80762 cri.go:89] found id: ""
	I0612 21:40:02.023741   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.023749   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:02.023754   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:02.023801   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:02.059136   80762 cri.go:89] found id: ""
	I0612 21:40:02.059168   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.059203   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:02.059212   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:02.059283   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:02.104520   80762 cri.go:89] found id: ""
	I0612 21:40:02.104544   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.104552   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:02.104558   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:02.104618   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:02.146130   80762 cri.go:89] found id: ""
	I0612 21:40:02.146164   80762 logs.go:276] 0 containers: []
	W0612 21:40:02.146176   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:02.146187   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:02.146202   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:02.199672   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:02.199710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:02.215224   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:02.215256   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:02.290030   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:02.290057   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:02.290072   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:02.374579   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:02.374615   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:01.667390   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:04.165253   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:00.512229   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:02.513298   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:05.018848   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:03.816610   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:05.818990   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:04.915345   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:04.928323   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:04.928404   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:04.963267   80762 cri.go:89] found id: ""
	I0612 21:40:04.963297   80762 logs.go:276] 0 containers: []
	W0612 21:40:04.963310   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:04.963319   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:04.963386   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:04.998378   80762 cri.go:89] found id: ""
	I0612 21:40:04.998409   80762 logs.go:276] 0 containers: []
	W0612 21:40:04.998420   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:04.998426   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:04.998498   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:05.038094   80762 cri.go:89] found id: ""
	I0612 21:40:05.038118   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.038126   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:05.038132   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:05.038181   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:05.074331   80762 cri.go:89] found id: ""
	I0612 21:40:05.074366   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.074379   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:05.074386   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:05.074462   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:05.109332   80762 cri.go:89] found id: ""
	I0612 21:40:05.109359   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.109368   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:05.109373   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:05.109423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:05.143875   80762 cri.go:89] found id: ""
	I0612 21:40:05.143908   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.143918   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:05.143926   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:05.143990   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:05.183695   80762 cri.go:89] found id: ""
	I0612 21:40:05.183724   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.183731   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:05.183737   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:05.183792   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:05.222852   80762 cri.go:89] found id: ""
	I0612 21:40:05.222878   80762 logs.go:276] 0 containers: []
	W0612 21:40:05.222887   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:05.222895   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:05.222907   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:05.262661   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:05.262687   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:05.315563   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:05.315593   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:05.332128   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:05.332163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:05.411675   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:05.411699   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:05.411712   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:06.665324   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:08.667163   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:07.512587   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:10.012843   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:08.316990   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:10.816093   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:07.991930   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:08.005743   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:08.005807   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:08.041685   80762 cri.go:89] found id: ""
	I0612 21:40:08.041714   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.041724   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:08.041732   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:08.041791   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:08.080875   80762 cri.go:89] found id: ""
	I0612 21:40:08.080905   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.080916   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:08.080925   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:08.080993   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:08.117290   80762 cri.go:89] found id: ""
	I0612 21:40:08.117316   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.117323   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:08.117329   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:08.117387   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:08.154345   80762 cri.go:89] found id: ""
	I0612 21:40:08.154376   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.154387   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:08.154395   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:08.154459   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:08.192913   80762 cri.go:89] found id: ""
	I0612 21:40:08.192947   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.192957   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:08.192969   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:08.193033   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:08.235732   80762 cri.go:89] found id: ""
	I0612 21:40:08.235764   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.235775   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:08.235782   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:08.235853   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:08.274282   80762 cri.go:89] found id: ""
	I0612 21:40:08.274306   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.274314   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:08.274320   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:08.274366   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:08.314585   80762 cri.go:89] found id: ""
	I0612 21:40:08.314608   80762 logs.go:276] 0 containers: []
	W0612 21:40:08.314619   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:08.314628   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:08.314641   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:08.331693   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:08.331725   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:08.414541   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:08.414565   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:08.414584   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:08.496428   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:08.496460   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:08.546991   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:08.547020   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:11.099778   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:11.113450   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:11.113539   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:11.150426   80762 cri.go:89] found id: ""
	I0612 21:40:11.150451   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.150459   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:11.150464   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:11.150524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:11.189931   80762 cri.go:89] found id: ""
	I0612 21:40:11.189958   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.189967   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:11.189972   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:11.190031   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:11.228116   80762 cri.go:89] found id: ""
	I0612 21:40:11.228144   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.228154   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:11.228161   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:11.228243   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:11.268639   80762 cri.go:89] found id: ""
	I0612 21:40:11.268664   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.268672   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:11.268678   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:11.268723   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:11.306077   80762 cri.go:89] found id: ""
	I0612 21:40:11.306105   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.306116   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:11.306123   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:11.306187   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:11.344360   80762 cri.go:89] found id: ""
	I0612 21:40:11.344388   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.344399   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:11.344418   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:11.344475   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:11.382906   80762 cri.go:89] found id: ""
	I0612 21:40:11.382937   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.382948   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:11.382957   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:11.383027   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:11.418388   80762 cri.go:89] found id: ""
	I0612 21:40:11.418419   80762 logs.go:276] 0 containers: []
	W0612 21:40:11.418429   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:11.418439   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:11.418453   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:11.432204   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:11.432241   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:11.508219   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:11.508251   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:11.508263   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:11.593021   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:11.593058   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:11.634056   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:11.634087   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:11.165384   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:13.170153   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:12.013303   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:14.013454   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:12.817129   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:15.316929   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:14.187831   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:14.203153   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:14.203248   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:14.239693   80762 cri.go:89] found id: ""
	I0612 21:40:14.239716   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.239723   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:14.239729   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:14.239827   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:14.273206   80762 cri.go:89] found id: ""
	I0612 21:40:14.273234   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.273244   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:14.273251   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:14.273313   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:14.315512   80762 cri.go:89] found id: ""
	I0612 21:40:14.315592   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.315610   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:14.315618   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:14.315679   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:14.352454   80762 cri.go:89] found id: ""
	I0612 21:40:14.352483   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.352496   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:14.352504   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:14.352554   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:14.387845   80762 cri.go:89] found id: ""
	I0612 21:40:14.387872   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.387880   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:14.387886   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:14.387935   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:14.423220   80762 cri.go:89] found id: ""
	I0612 21:40:14.423245   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.423254   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:14.423259   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:14.423322   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:14.457744   80762 cri.go:89] found id: ""
	I0612 21:40:14.457772   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.457784   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:14.457791   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:14.457849   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:14.493580   80762 cri.go:89] found id: ""
	I0612 21:40:14.493611   80762 logs.go:276] 0 containers: []
	W0612 21:40:14.493622   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:14.493633   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:14.493669   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:14.566867   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:14.566894   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:14.566913   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:14.645916   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:14.645959   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:14.690232   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:14.690262   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:14.741532   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:14.741576   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:17.257886   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:17.271841   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:17.271910   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:17.309628   80762 cri.go:89] found id: ""
	I0612 21:40:17.309654   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.309667   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:17.309675   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:17.309746   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:17.346671   80762 cri.go:89] found id: ""
	I0612 21:40:17.346752   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.346769   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:17.346777   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:17.346842   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:17.381145   80762 cri.go:89] found id: ""
	I0612 21:40:17.381169   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.381177   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:17.381184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:17.381241   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:17.417159   80762 cri.go:89] found id: ""
	I0612 21:40:17.417179   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.417187   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:17.417194   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:17.417254   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:17.453189   80762 cri.go:89] found id: ""
	I0612 21:40:17.453213   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.453220   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:17.453226   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:17.453284   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:17.510988   80762 cri.go:89] found id: ""
	I0612 21:40:17.511012   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.511019   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:17.511026   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:17.511083   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:17.548141   80762 cri.go:89] found id: ""
	I0612 21:40:17.548166   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.548176   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:17.548182   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:17.548243   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:17.584591   80762 cri.go:89] found id: ""
	I0612 21:40:17.584619   80762 logs.go:276] 0 containers: []
	W0612 21:40:17.584627   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:17.584637   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:17.584647   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:17.628627   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:17.628662   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:17.682792   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:17.682823   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:17.697921   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:17.697959   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:17.770591   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:17.770617   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:17.770633   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:15.665831   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:18.165059   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:16.014130   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:18.513491   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:17.817443   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.316576   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.350181   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:20.363671   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:20.363743   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:20.399858   80762 cri.go:89] found id: ""
	I0612 21:40:20.399889   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.399896   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:20.399903   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:20.399963   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:20.437715   80762 cri.go:89] found id: ""
	I0612 21:40:20.437755   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.437766   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:20.437776   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:20.437843   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:20.472525   80762 cri.go:89] found id: ""
	I0612 21:40:20.472558   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.472573   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:20.472582   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:20.472642   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:20.507923   80762 cri.go:89] found id: ""
	I0612 21:40:20.507948   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.507959   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:20.507966   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:20.508029   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:20.545471   80762 cri.go:89] found id: ""
	I0612 21:40:20.545502   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.545512   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:20.545519   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:20.545586   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:20.583793   80762 cri.go:89] found id: ""
	I0612 21:40:20.583829   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.583839   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:20.583846   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:20.583912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:20.624399   80762 cri.go:89] found id: ""
	I0612 21:40:20.624438   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.624449   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:20.624467   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:20.624530   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:20.665158   80762 cri.go:89] found id: ""
	I0612 21:40:20.665184   80762 logs.go:276] 0 containers: []
	W0612 21:40:20.665194   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:20.665203   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:20.665217   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:20.743062   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:20.743101   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:20.792573   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:20.792613   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:20.847998   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:20.848033   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:20.863447   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:20.863497   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:20.938020   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:20.165455   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:22.665110   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:24.665262   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:20.513556   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:23.014750   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:22.316950   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:24.815377   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:26.817066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:23.438289   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:23.453792   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:23.453855   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:23.494044   80762 cri.go:89] found id: ""
	I0612 21:40:23.494070   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.494077   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:23.494083   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:23.494144   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:23.533278   80762 cri.go:89] found id: ""
	I0612 21:40:23.533305   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.533313   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:23.533319   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:23.533380   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:23.568504   80762 cri.go:89] found id: ""
	I0612 21:40:23.568538   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.568549   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:23.568556   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:23.568619   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:23.610596   80762 cri.go:89] found id: ""
	I0612 21:40:23.610624   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.610633   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:23.610638   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:23.610690   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:23.651856   80762 cri.go:89] found id: ""
	I0612 21:40:23.651886   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.651896   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:23.651903   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:23.651978   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:23.690989   80762 cri.go:89] found id: ""
	I0612 21:40:23.691020   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.691030   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:23.691036   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:23.691089   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:23.730417   80762 cri.go:89] found id: ""
	I0612 21:40:23.730454   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.730467   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:23.730476   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:23.730538   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:23.773887   80762 cri.go:89] found id: ""
	I0612 21:40:23.773913   80762 logs.go:276] 0 containers: []
	W0612 21:40:23.773921   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:23.773932   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:23.773947   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:23.825771   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:23.825805   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:23.840136   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:23.840163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:23.933645   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:23.933670   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:23.933686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:24.020205   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:24.020243   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:26.566746   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:26.579557   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:26.579612   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:26.614721   80762 cri.go:89] found id: ""
	I0612 21:40:26.614749   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.614757   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:26.614763   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:26.614815   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:26.651398   80762 cri.go:89] found id: ""
	I0612 21:40:26.651427   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.651437   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:26.651445   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:26.651506   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:26.688217   80762 cri.go:89] found id: ""
	I0612 21:40:26.688249   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.688261   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:26.688268   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:26.688333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:26.721316   80762 cri.go:89] found id: ""
	I0612 21:40:26.721346   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.721357   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:26.721364   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:26.721424   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:26.758842   80762 cri.go:89] found id: ""
	I0612 21:40:26.758868   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.758878   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:26.758885   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:26.758957   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:26.795696   80762 cri.go:89] found id: ""
	I0612 21:40:26.795725   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.795733   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:26.795738   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:26.795788   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:26.834903   80762 cri.go:89] found id: ""
	I0612 21:40:26.834932   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.834941   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:26.834947   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:26.835020   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:26.872751   80762 cri.go:89] found id: ""
	I0612 21:40:26.872788   80762 logs.go:276] 0 containers: []
	W0612 21:40:26.872796   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:26.872805   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:26.872817   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:26.952401   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:26.952440   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:26.990548   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:26.990583   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:27.042973   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:27.043029   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:27.058348   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:27.058379   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:27.133047   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:26.666430   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.165063   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:25.513982   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:28.012556   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:30.017664   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.315668   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:31.316817   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:29.634105   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:29.654113   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:29.654171   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:29.700138   80762 cri.go:89] found id: ""
	I0612 21:40:29.700169   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.700179   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:29.700188   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:29.700260   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:29.751599   80762 cri.go:89] found id: ""
	I0612 21:40:29.751628   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.751638   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:29.751646   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:29.751699   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:29.801971   80762 cri.go:89] found id: ""
	I0612 21:40:29.801995   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.802003   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:29.802008   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:29.802059   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:29.839381   80762 cri.go:89] found id: ""
	I0612 21:40:29.839407   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.839418   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:29.839426   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:29.839484   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:29.876634   80762 cri.go:89] found id: ""
	I0612 21:40:29.876661   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.876668   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:29.876675   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:29.876721   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:29.909673   80762 cri.go:89] found id: ""
	I0612 21:40:29.909707   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.909718   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:29.909726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:29.909791   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:29.947984   80762 cri.go:89] found id: ""
	I0612 21:40:29.948019   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.948029   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:29.948037   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:29.948099   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:29.988611   80762 cri.go:89] found id: ""
	I0612 21:40:29.988639   80762 logs.go:276] 0 containers: []
	W0612 21:40:29.988650   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:29.988660   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:29.988675   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:30.073180   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:30.073216   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:30.114703   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:30.114732   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:30.173242   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:30.173278   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:30.189081   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:30.189112   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:30.263564   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:32.763967   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:32.776738   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:32.776808   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:32.813088   80762 cri.go:89] found id: ""
	I0612 21:40:32.813115   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.813125   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:32.813132   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:32.813195   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:32.850960   80762 cri.go:89] found id: ""
	I0612 21:40:32.850987   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.850996   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:32.851004   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:32.851065   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:31.166578   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:33.669302   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:32.512480   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:34.512817   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:33.815867   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:35.817105   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:32.887229   80762 cri.go:89] found id: ""
	I0612 21:40:32.887259   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.887270   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:32.887277   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:32.887346   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:32.923123   80762 cri.go:89] found id: ""
	I0612 21:40:32.923148   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.923158   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:32.923164   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:32.923242   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:32.962603   80762 cri.go:89] found id: ""
	I0612 21:40:32.962628   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.962638   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:32.962644   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:32.962695   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:32.998971   80762 cri.go:89] found id: ""
	I0612 21:40:32.999025   80762 logs.go:276] 0 containers: []
	W0612 21:40:32.999037   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:32.999046   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:32.999120   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:33.037640   80762 cri.go:89] found id: ""
	I0612 21:40:33.037670   80762 logs.go:276] 0 containers: []
	W0612 21:40:33.037680   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:33.037686   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:33.037748   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:33.073758   80762 cri.go:89] found id: ""
	I0612 21:40:33.073787   80762 logs.go:276] 0 containers: []
	W0612 21:40:33.073794   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:33.073804   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:33.073815   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:33.124478   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:33.124512   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:33.139010   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:33.139036   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:33.207693   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:33.207716   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:33.207732   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:33.287710   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:33.287746   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:35.831654   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:35.845783   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:35.845845   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:35.882097   80762 cri.go:89] found id: ""
	I0612 21:40:35.882129   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.882141   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:35.882149   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:35.882205   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:35.920931   80762 cri.go:89] found id: ""
	I0612 21:40:35.920972   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.920980   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:35.920985   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:35.921061   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:35.958689   80762 cri.go:89] found id: ""
	I0612 21:40:35.958712   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.958721   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:35.958726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:35.958774   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:35.994973   80762 cri.go:89] found id: ""
	I0612 21:40:35.995028   80762 logs.go:276] 0 containers: []
	W0612 21:40:35.995040   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:35.995048   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:35.995114   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:36.035679   80762 cri.go:89] found id: ""
	I0612 21:40:36.035707   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.035715   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:36.035721   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:36.035768   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:36.071498   80762 cri.go:89] found id: ""
	I0612 21:40:36.071525   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.071534   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:36.071544   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:36.071594   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:36.107367   80762 cri.go:89] found id: ""
	I0612 21:40:36.107397   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.107406   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:36.107413   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:36.107466   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:36.148668   80762 cri.go:89] found id: ""
	I0612 21:40:36.148699   80762 logs.go:276] 0 containers: []
	W0612 21:40:36.148710   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:36.148721   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:36.148736   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:36.207719   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:36.207765   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:36.223129   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:36.223158   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:36.290786   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:36.290809   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:36.290822   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:36.375361   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:36.375398   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:36.165430   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.165989   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:37.015936   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:39.513497   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.318886   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:40.815802   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:38.921100   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:38.935420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:38.935491   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:38.970519   80762 cri.go:89] found id: ""
	I0612 21:40:38.970548   80762 logs.go:276] 0 containers: []
	W0612 21:40:38.970559   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:38.970567   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:38.970639   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:39.005866   80762 cri.go:89] found id: ""
	I0612 21:40:39.005888   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.005896   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:39.005902   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:39.005954   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:39.043619   80762 cri.go:89] found id: ""
	I0612 21:40:39.043647   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.043655   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:39.043661   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:39.043709   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:39.081311   80762 cri.go:89] found id: ""
	I0612 21:40:39.081336   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.081344   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:39.081350   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:39.081410   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:39.117326   80762 cri.go:89] found id: ""
	I0612 21:40:39.117358   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.117367   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:39.117372   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:39.117423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:39.151785   80762 cri.go:89] found id: ""
	I0612 21:40:39.151819   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.151828   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:39.151835   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:39.151899   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:39.187031   80762 cri.go:89] found id: ""
	I0612 21:40:39.187057   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.187065   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:39.187071   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:39.187119   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:39.222186   80762 cri.go:89] found id: ""
	I0612 21:40:39.222212   80762 logs.go:276] 0 containers: []
	W0612 21:40:39.222223   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:39.222233   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:39.222245   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:39.276126   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:39.276164   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:39.291631   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:39.291658   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:39.365615   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:39.365641   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:39.365659   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:39.442548   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:39.442600   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:41.980840   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:41.996629   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:41.996686   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:42.034158   80762 cri.go:89] found id: ""
	I0612 21:40:42.034186   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.034195   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:42.034202   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:42.034274   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:42.070981   80762 cri.go:89] found id: ""
	I0612 21:40:42.071011   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.071021   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:42.071028   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:42.071093   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:42.108282   80762 cri.go:89] found id: ""
	I0612 21:40:42.108309   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.108316   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:42.108322   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:42.108369   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:42.146394   80762 cri.go:89] found id: ""
	I0612 21:40:42.146423   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.146434   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:42.146454   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:42.146539   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:42.183577   80762 cri.go:89] found id: ""
	I0612 21:40:42.183601   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.183608   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:42.183614   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:42.183662   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:42.222069   80762 cri.go:89] found id: ""
	I0612 21:40:42.222100   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.222109   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:42.222115   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:42.222168   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:42.259128   80762 cri.go:89] found id: ""
	I0612 21:40:42.259155   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.259164   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:42.259192   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:42.259282   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:42.296321   80762 cri.go:89] found id: ""
	I0612 21:40:42.296354   80762 logs.go:276] 0 containers: []
	W0612 21:40:42.296368   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:42.296380   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:42.296400   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:42.311098   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:42.311137   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:42.386116   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:42.386144   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:42.386163   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:42.467016   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:42.467054   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:42.509143   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:42.509180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:40.166288   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.664817   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:44.665596   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.017043   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:44.513368   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:42.816702   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:45.316890   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:45.062872   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:45.076570   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:45.076658   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:45.114362   80762 cri.go:89] found id: ""
	I0612 21:40:45.114394   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.114404   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:45.114412   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:45.114478   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:45.151577   80762 cri.go:89] found id: ""
	I0612 21:40:45.151609   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.151620   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:45.151627   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:45.151689   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:45.188753   80762 cri.go:89] found id: ""
	I0612 21:40:45.188785   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.188795   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:45.188802   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:45.188861   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:45.224775   80762 cri.go:89] found id: ""
	I0612 21:40:45.224801   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.224808   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:45.224814   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:45.224873   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:45.260440   80762 cri.go:89] found id: ""
	I0612 21:40:45.260472   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.260483   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:45.260490   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:45.260547   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:45.297662   80762 cri.go:89] found id: ""
	I0612 21:40:45.297697   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.297709   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:45.297716   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:45.297774   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:45.335637   80762 cri.go:89] found id: ""
	I0612 21:40:45.335669   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.335682   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:45.335690   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:45.335753   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:45.371523   80762 cri.go:89] found id: ""
	I0612 21:40:45.371580   80762 logs.go:276] 0 containers: []
	W0612 21:40:45.371590   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:45.371599   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:45.371610   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:45.424029   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:45.424065   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:45.440339   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:45.440378   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:45.509504   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:45.509526   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:45.509541   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:45.591857   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:45.591893   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:47.166437   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.665544   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:47.016561   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.511894   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:47.320090   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:49.816816   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
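	The interleaved pod_ready.go:102 lines come from parallel StartStop tests polling whether their metrics-server pod has reached the Ready condition. A sketch of an equivalent manual check, using a pod name and namespace taken from the log (the kubectl context for each test is not shown in this excerpt, so it is omitted):

	    # Prints "True" once the pod's Ready condition is satisfied, "False" while it is not.
	    kubectl -n kube-system get pod metrics-server-569cc877fc-d5mj6 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'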
	I0612 21:40:48.135912   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:48.151271   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:48.151331   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:48.192740   80762 cri.go:89] found id: ""
	I0612 21:40:48.192775   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.192788   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:48.192798   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:48.192875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:48.230440   80762 cri.go:89] found id: ""
	I0612 21:40:48.230469   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.230479   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:48.230487   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:48.230549   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:48.270892   80762 cri.go:89] found id: ""
	I0612 21:40:48.270922   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.270933   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:48.270941   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:48.270996   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:48.308555   80762 cri.go:89] found id: ""
	I0612 21:40:48.308580   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.308588   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:48.308594   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:48.308640   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:48.342705   80762 cri.go:89] found id: ""
	I0612 21:40:48.342727   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.342735   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:48.342741   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:48.342788   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:48.377418   80762 cri.go:89] found id: ""
	I0612 21:40:48.377450   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.377461   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:48.377468   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:48.377535   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:48.413092   80762 cri.go:89] found id: ""
	I0612 21:40:48.413126   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.413141   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:48.413149   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:48.413215   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:48.447673   80762 cri.go:89] found id: ""
	I0612 21:40:48.447699   80762 logs.go:276] 0 containers: []
	W0612 21:40:48.447708   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:48.447716   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:48.447728   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:48.488508   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:48.488542   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:48.540573   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:48.540608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:48.554735   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:48.554762   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:48.632074   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:48.632098   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:48.632117   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:51.212336   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:51.227428   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:51.227493   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:51.268124   80762 cri.go:89] found id: ""
	I0612 21:40:51.268157   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.268167   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:51.268172   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:51.268220   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:51.305751   80762 cri.go:89] found id: ""
	I0612 21:40:51.305777   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.305785   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:51.305793   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:51.305849   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:51.347292   80762 cri.go:89] found id: ""
	I0612 21:40:51.347318   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.347325   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:51.347332   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:51.347394   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:51.387476   80762 cri.go:89] found id: ""
	I0612 21:40:51.387501   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.387509   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:51.387515   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:51.387573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:51.431992   80762 cri.go:89] found id: ""
	I0612 21:40:51.432019   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.432029   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:51.432036   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:51.432096   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:51.477204   80762 cri.go:89] found id: ""
	I0612 21:40:51.477235   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.477246   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:51.477254   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:51.477346   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:51.518449   80762 cri.go:89] found id: ""
	I0612 21:40:51.518477   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.518488   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:51.518502   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:51.518562   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:51.554991   80762 cri.go:89] found id: ""
	I0612 21:40:51.555015   80762 logs.go:276] 0 containers: []
	W0612 21:40:51.555024   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:51.555033   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:51.555046   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:51.606732   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:51.606769   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:51.620512   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:51.620538   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:51.697029   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:51.697058   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:51.697074   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:51.775401   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:51.775437   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:51.666561   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.166247   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:51.512909   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.012887   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:52.315904   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.316764   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:56.816819   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:54.318059   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:54.331420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:54.331509   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:54.367886   80762 cri.go:89] found id: ""
	I0612 21:40:54.367926   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.367948   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:54.367959   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:54.368047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:54.403998   80762 cri.go:89] found id: ""
	I0612 21:40:54.404023   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.404034   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:54.404041   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:54.404108   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:54.441449   80762 cri.go:89] found id: ""
	I0612 21:40:54.441480   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.441491   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:54.441498   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:54.441557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:54.476459   80762 cri.go:89] found id: ""
	I0612 21:40:54.476490   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.476500   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:54.476508   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:54.476573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:54.515337   80762 cri.go:89] found id: ""
	I0612 21:40:54.515360   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.515368   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:54.515374   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:54.515423   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:54.551447   80762 cri.go:89] found id: ""
	I0612 21:40:54.551468   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.551475   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:54.551481   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:54.551528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:54.587082   80762 cri.go:89] found id: ""
	I0612 21:40:54.587114   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.587125   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:54.587145   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:54.587225   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:54.624211   80762 cri.go:89] found id: ""
	I0612 21:40:54.624235   80762 logs.go:276] 0 containers: []
	W0612 21:40:54.624257   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:54.624268   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:54.624282   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:54.677816   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:54.677848   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:54.693725   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:54.693749   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:54.772229   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:54.772255   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:54.772273   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:54.852543   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:54.852578   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:40:57.397722   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:40:57.411082   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:40:57.411145   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:40:57.449633   80762 cri.go:89] found id: ""
	I0612 21:40:57.449662   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.449673   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:40:57.449680   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:40:57.449745   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:40:57.489855   80762 cri.go:89] found id: ""
	I0612 21:40:57.489880   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.489889   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:40:57.489894   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:40:57.489952   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:40:57.528986   80762 cri.go:89] found id: ""
	I0612 21:40:57.529006   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.529014   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:40:57.529019   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:40:57.529081   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:40:57.566701   80762 cri.go:89] found id: ""
	I0612 21:40:57.566730   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.566739   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:40:57.566746   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:40:57.566800   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:40:57.601114   80762 cri.go:89] found id: ""
	I0612 21:40:57.601137   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.601145   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:40:57.601151   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:40:57.601212   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:40:57.636120   80762 cri.go:89] found id: ""
	I0612 21:40:57.636145   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.636155   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:40:57.636163   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:40:57.636225   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:40:57.676912   80762 cri.go:89] found id: ""
	I0612 21:40:57.676953   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.676960   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:40:57.676966   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:40:57.677039   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:40:57.714671   80762 cri.go:89] found id: ""
	I0612 21:40:57.714691   80762 logs.go:276] 0 containers: []
	W0612 21:40:57.714699   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:40:57.714707   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:40:57.714720   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:40:57.770550   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:40:57.770583   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:40:57.785062   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:40:57.785093   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:40:57.853448   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:40:57.853468   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:40:57.853480   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:40:56.167768   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.665108   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:56.014274   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.014535   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:58.816961   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:00.817450   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:40:57.939957   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:40:57.939999   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:00.493469   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:00.509746   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:00.509819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:00.546582   80762 cri.go:89] found id: ""
	I0612 21:41:00.546610   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.546620   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:00.546629   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:00.546683   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:00.584229   80762 cri.go:89] found id: ""
	I0612 21:41:00.584256   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.584264   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:00.584269   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:00.584337   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:00.618679   80762 cri.go:89] found id: ""
	I0612 21:41:00.618704   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.618712   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:00.618719   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:00.618778   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:00.656336   80762 cri.go:89] found id: ""
	I0612 21:41:00.656364   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.656375   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:00.656384   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:00.656457   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:00.694147   80762 cri.go:89] found id: ""
	I0612 21:41:00.694173   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.694182   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:00.694187   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:00.694236   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:00.733964   80762 cri.go:89] found id: ""
	I0612 21:41:00.733994   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.734006   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:00.734014   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:00.734076   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:00.771245   80762 cri.go:89] found id: ""
	I0612 21:41:00.771274   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.771287   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:00.771293   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:00.771357   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:00.809118   80762 cri.go:89] found id: ""
	I0612 21:41:00.809150   80762 logs.go:276] 0 containers: []
	W0612 21:41:00.809158   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:00.809168   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:00.809188   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:00.863479   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:00.863514   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:00.878749   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:00.878783   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:00.955800   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:00.955825   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:00.955844   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:01.037587   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:01.037618   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:00.666373   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.165203   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:00.513805   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.017922   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.317115   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:05.817907   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:03.583063   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:03.597656   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:03.597732   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:03.633233   80762 cri.go:89] found id: ""
	I0612 21:41:03.633263   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.633283   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:03.633291   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:03.633357   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:03.679900   80762 cri.go:89] found id: ""
	I0612 21:41:03.679930   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.679941   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:03.679948   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:03.680018   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:03.718766   80762 cri.go:89] found id: ""
	I0612 21:41:03.718792   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.718800   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:03.718811   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:03.718868   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:03.759404   80762 cri.go:89] found id: ""
	I0612 21:41:03.759429   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.759437   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:03.759443   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:03.759496   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:03.794313   80762 cri.go:89] found id: ""
	I0612 21:41:03.794348   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.794357   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:03.794364   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:03.794430   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:03.832525   80762 cri.go:89] found id: ""
	I0612 21:41:03.832546   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.832554   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:03.832559   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:03.832607   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:03.872841   80762 cri.go:89] found id: ""
	I0612 21:41:03.872868   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.872878   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:03.872885   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:03.872945   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:03.912736   80762 cri.go:89] found id: ""
	I0612 21:41:03.912760   80762 logs.go:276] 0 containers: []
	W0612 21:41:03.912770   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:03.912781   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:03.912794   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:03.986645   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:03.986672   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:03.986688   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:04.066766   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:04.066799   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:04.108219   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:04.108250   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:04.168866   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:04.168911   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:06.684232   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:06.698359   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:06.698443   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:06.735324   80762 cri.go:89] found id: ""
	I0612 21:41:06.735350   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.735359   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:06.735364   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:06.735418   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:06.771763   80762 cri.go:89] found id: ""
	I0612 21:41:06.771786   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.771794   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:06.771799   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:06.771850   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:06.808151   80762 cri.go:89] found id: ""
	I0612 21:41:06.808179   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.808188   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:06.808193   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:06.808263   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:06.846099   80762 cri.go:89] found id: ""
	I0612 21:41:06.846121   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.846129   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:06.846134   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:06.846182   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:06.883559   80762 cri.go:89] found id: ""
	I0612 21:41:06.883584   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.883591   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:06.883597   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:06.883645   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:06.920799   80762 cri.go:89] found id: ""
	I0612 21:41:06.920830   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.920841   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:06.920849   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:06.920914   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:06.964441   80762 cri.go:89] found id: ""
	I0612 21:41:06.964472   80762 logs.go:276] 0 containers: []
	W0612 21:41:06.964482   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:06.964490   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:06.964561   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:07.000866   80762 cri.go:89] found id: ""
	I0612 21:41:07.000901   80762 logs.go:276] 0 containers: []
	W0612 21:41:07.000912   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:07.000924   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:07.000993   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:07.017074   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:07.017121   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:07.093873   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:07.093901   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:07.093925   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:07.171258   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:07.171293   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:07.212588   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:07.212624   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:05.166177   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:07.665354   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.665558   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:05.512109   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:07.512615   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.513483   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:08.316327   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:10.316456   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:09.767332   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:09.781184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:09.781249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:09.818966   80762 cri.go:89] found id: ""
	I0612 21:41:09.818999   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.819008   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:09.819014   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:09.819064   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:09.854714   80762 cri.go:89] found id: ""
	I0612 21:41:09.854742   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.854760   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:09.854772   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:09.854823   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:09.891229   80762 cri.go:89] found id: ""
	I0612 21:41:09.891257   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.891268   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:09.891274   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:09.891335   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:09.928569   80762 cri.go:89] found id: ""
	I0612 21:41:09.928598   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.928610   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:09.928617   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:09.928680   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:09.963681   80762 cri.go:89] found id: ""
	I0612 21:41:09.963714   80762 logs.go:276] 0 containers: []
	W0612 21:41:09.963725   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:09.963733   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:09.963819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:10.002340   80762 cri.go:89] found id: ""
	I0612 21:41:10.002368   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.002380   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:10.002388   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:10.002454   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:10.041935   80762 cri.go:89] found id: ""
	I0612 21:41:10.041961   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.041972   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:10.041979   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:10.042047   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:10.080541   80762 cri.go:89] found id: ""
	I0612 21:41:10.080571   80762 logs.go:276] 0 containers: []
	W0612 21:41:10.080584   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:10.080598   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:10.080614   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:10.140904   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:10.140944   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:10.176646   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:10.176688   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:10.272147   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:10.272169   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:10.272183   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:10.352649   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:10.352686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:12.166618   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.665896   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.013177   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.013716   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.317177   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:14.317391   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:16.815940   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:12.896274   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:12.911147   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:12.911231   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:12.947628   80762 cri.go:89] found id: ""
	I0612 21:41:12.947651   80762 logs.go:276] 0 containers: []
	W0612 21:41:12.947660   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:12.947665   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:12.947726   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:12.982813   80762 cri.go:89] found id: ""
	I0612 21:41:12.982837   80762 logs.go:276] 0 containers: []
	W0612 21:41:12.982845   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:12.982851   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:12.982898   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:13.021360   80762 cri.go:89] found id: ""
	I0612 21:41:13.021403   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.021412   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:13.021417   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:13.021468   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:13.063534   80762 cri.go:89] found id: ""
	I0612 21:41:13.063566   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.063576   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:13.063585   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:13.063666   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:13.098767   80762 cri.go:89] found id: ""
	I0612 21:41:13.098796   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.098807   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:13.098816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:13.098878   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:13.140764   80762 cri.go:89] found id: ""
	I0612 21:41:13.140797   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.140809   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:13.140816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:13.140882   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:13.180356   80762 cri.go:89] found id: ""
	I0612 21:41:13.180400   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.180413   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:13.180420   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:13.180482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:13.215198   80762 cri.go:89] found id: ""
	I0612 21:41:13.215227   80762 logs.go:276] 0 containers: []
	W0612 21:41:13.215238   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:13.215249   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:13.215265   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:13.273143   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:13.273182   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:13.287861   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:13.287893   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:13.366052   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:13.366073   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:13.366099   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:13.450980   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:13.451015   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:15.991386   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:16.005618   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:16.005699   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:16.047253   80762 cri.go:89] found id: ""
	I0612 21:41:16.047281   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.047289   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:16.047295   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:16.047356   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:16.082860   80762 cri.go:89] found id: ""
	I0612 21:41:16.082886   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.082894   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:16.082899   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:16.082948   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:16.123127   80762 cri.go:89] found id: ""
	I0612 21:41:16.123152   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.123164   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:16.123187   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:16.123247   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:16.167155   80762 cri.go:89] found id: ""
	I0612 21:41:16.167189   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.167199   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:16.167207   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:16.167276   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:16.204036   80762 cri.go:89] found id: ""
	I0612 21:41:16.204061   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.204071   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:16.204079   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:16.204140   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:16.246672   80762 cri.go:89] found id: ""
	I0612 21:41:16.246701   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.246712   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:16.246721   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:16.246785   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:16.286820   80762 cri.go:89] found id: ""
	I0612 21:41:16.286849   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.286857   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:16.286864   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:16.286919   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:16.326622   80762 cri.go:89] found id: ""
	I0612 21:41:16.326649   80762 logs.go:276] 0 containers: []
	W0612 21:41:16.326660   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:16.326667   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:16.326678   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:16.407492   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:16.407525   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:16.448207   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:16.448236   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:16.501675   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:16.501714   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:16.518173   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:16.518202   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:16.592582   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:17.166163   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.167204   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:16.514405   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.016197   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:18.816596   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:20.817504   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:19.093054   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:19.107926   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:19.108002   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:19.149386   80762 cri.go:89] found id: ""
	I0612 21:41:19.149411   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.149421   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:19.149429   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:19.149493   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:19.188092   80762 cri.go:89] found id: ""
	I0612 21:41:19.188120   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.188131   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:19.188139   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:19.188201   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:19.227203   80762 cri.go:89] found id: ""
	I0612 21:41:19.227229   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.227235   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:19.227242   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:19.227301   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:19.269187   80762 cri.go:89] found id: ""
	I0612 21:41:19.269217   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.269225   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:19.269232   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:19.269294   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:19.305394   80762 cri.go:89] found id: ""
	I0612 21:41:19.305418   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.305425   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:19.305431   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:19.305480   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:19.347794   80762 cri.go:89] found id: ""
	I0612 21:41:19.347825   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.347837   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:19.347846   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:19.347907   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:19.384314   80762 cri.go:89] found id: ""
	I0612 21:41:19.384346   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.384364   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:19.384372   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:19.384428   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:19.421782   80762 cri.go:89] found id: ""
	I0612 21:41:19.421811   80762 logs.go:276] 0 containers: []
	W0612 21:41:19.421822   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:19.421834   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:19.421849   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:19.475969   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:19.476000   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:19.490683   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:19.490710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:19.574492   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:19.574513   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:19.574525   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:19.662288   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:19.662324   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:22.205404   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:22.220217   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:22.220297   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:22.256870   80762 cri.go:89] found id: ""
	I0612 21:41:22.256901   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.256913   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:22.256921   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:22.256984   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:22.290380   80762 cri.go:89] found id: ""
	I0612 21:41:22.290413   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.290425   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:22.290433   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:22.290497   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:22.324981   80762 cri.go:89] found id: ""
	I0612 21:41:22.325010   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.325019   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:22.325024   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:22.325093   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:22.362900   80762 cri.go:89] found id: ""
	I0612 21:41:22.362926   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.362938   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:22.362946   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:22.363008   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:22.399004   80762 cri.go:89] found id: ""
	I0612 21:41:22.399037   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.399048   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:22.399056   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:22.399116   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:22.434306   80762 cri.go:89] found id: ""
	I0612 21:41:22.434341   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.434355   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:22.434365   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:22.434422   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:22.479085   80762 cri.go:89] found id: ""
	I0612 21:41:22.479116   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.479129   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:22.479142   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:22.479228   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:22.516730   80762 cri.go:89] found id: ""
	I0612 21:41:22.516755   80762 logs.go:276] 0 containers: []
	W0612 21:41:22.516761   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:22.516769   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:22.516780   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:22.570921   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:22.570957   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:22.585409   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:22.585437   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:22.667314   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:22.667342   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:22.667360   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:22.743865   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:22.743901   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:21.170060   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.666364   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:21.021658   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.512541   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:23.316232   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.816641   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.282306   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:25.297334   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:25.297407   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:25.336610   80762 cri.go:89] found id: ""
	I0612 21:41:25.336641   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.336654   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:25.336662   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:25.336729   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:25.373307   80762 cri.go:89] found id: ""
	I0612 21:41:25.373338   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.373350   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:25.373358   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:25.373425   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:25.413141   80762 cri.go:89] found id: ""
	I0612 21:41:25.413169   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.413177   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:25.413183   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:25.413233   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:25.450810   80762 cri.go:89] found id: ""
	I0612 21:41:25.450842   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.450853   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:25.450862   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:25.450924   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:25.487017   80762 cri.go:89] found id: ""
	I0612 21:41:25.487049   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.487255   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:25.487269   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:25.487328   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:25.524335   80762 cri.go:89] found id: ""
	I0612 21:41:25.524361   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.524371   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:25.524377   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:25.524428   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:25.560394   80762 cri.go:89] found id: ""
	I0612 21:41:25.560421   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.560429   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:25.560435   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:25.560482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:25.599334   80762 cri.go:89] found id: ""
	I0612 21:41:25.599362   80762 logs.go:276] 0 containers: []
	W0612 21:41:25.599372   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:25.599384   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:25.599399   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:25.680153   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:25.680195   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:25.726336   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:25.726377   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:25.777064   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:25.777098   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:25.791978   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:25.792007   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:25.868860   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:25.667028   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.164920   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:25.514249   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.012042   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:30.013658   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.316543   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:30.816789   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:28.369099   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:28.382729   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:28.382786   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:28.423835   80762 cri.go:89] found id: ""
	I0612 21:41:28.423865   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.423875   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:28.423889   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:28.423953   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:28.463098   80762 cri.go:89] found id: ""
	I0612 21:41:28.463127   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.463137   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:28.463144   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:28.463223   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:28.499678   80762 cri.go:89] found id: ""
	I0612 21:41:28.499707   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.499718   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:28.499726   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:28.499786   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:28.536057   80762 cri.go:89] found id: ""
	I0612 21:41:28.536089   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.536101   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:28.536108   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:28.536180   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:28.571052   80762 cri.go:89] found id: ""
	I0612 21:41:28.571080   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.571090   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:28.571098   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:28.571162   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:28.607320   80762 cri.go:89] found id: ""
	I0612 21:41:28.607348   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.607360   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:28.607368   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:28.607427   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:28.643000   80762 cri.go:89] found id: ""
	I0612 21:41:28.643029   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.643037   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:28.643042   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:28.643113   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:28.684134   80762 cri.go:89] found id: ""
	I0612 21:41:28.684164   80762 logs.go:276] 0 containers: []
	W0612 21:41:28.684175   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:28.684186   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:28.684201   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:28.737059   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:28.737092   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:28.753290   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:28.753320   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:28.826964   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:28.826990   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:28.827009   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:28.908874   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:28.908919   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:31.450362   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:31.465831   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:31.465912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:31.507441   80762 cri.go:89] found id: ""
	I0612 21:41:31.507465   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.507474   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:31.507482   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:31.507546   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:31.541403   80762 cri.go:89] found id: ""
	I0612 21:41:31.541437   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.541450   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:31.541458   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:31.541524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:31.576367   80762 cri.go:89] found id: ""
	I0612 21:41:31.576393   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.576405   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:31.576412   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:31.576475   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:31.615053   80762 cri.go:89] found id: ""
	I0612 21:41:31.615081   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.615091   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:31.615099   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:31.615159   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:31.650460   80762 cri.go:89] found id: ""
	I0612 21:41:31.650495   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.650504   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:31.650511   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:31.650580   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:31.690764   80762 cri.go:89] found id: ""
	I0612 21:41:31.690792   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.690803   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:31.690810   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:31.690870   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:31.729785   80762 cri.go:89] found id: ""
	I0612 21:41:31.729809   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.729817   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:31.729822   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:31.729881   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:31.772978   80762 cri.go:89] found id: ""
	I0612 21:41:31.773005   80762 logs.go:276] 0 containers: []
	W0612 21:41:31.773013   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:31.773023   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:31.773038   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:31.830451   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:31.830484   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:31.846821   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:31.846850   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:31.927289   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:31.927328   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:31.927358   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:32.004814   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:32.004852   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
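The cycle above — a pgrep for a kube-apiserver process, crictl queries for each control-plane container, then a fallback to kubelet/dmesg/CRI-O logs once `kubectl describe nodes` fails with "connection refused" on localhost:8443 — keeps repeating in the rest of this run because the v1.20.0 control plane never comes up. A rough sketch of how the same checks could be reproduced by hand on the node; the commands are taken from the log lines above, while the `minikube ssh -p <profile>` wrapper and the placeholder profile name are assumptions not shown in this excerpt:

    # open a shell on the node, e.g.: minikube ssh -p <profile>   (profile name assumed)
    sudo crictl ps -a --quiet --name=kube-apiserver        # same query the harness runs; empty output here
    sudo journalctl -u kubelet -n 400                       # kubelet log slice gathered by logs.go
    sudo journalctl -u crio -n 400                          # CRI-O log slice
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig             # fails with "connection refused" while the apiserver is down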
	I0612 21:41:30.165423   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:32.165695   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.664959   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:32.512866   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:34.515104   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:33.316674   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:35.816686   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
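The interleaved pod_ready.go lines come from three other minikube processes (pids 80404, 80243 and 80157, presumably the parallel StartStop profiles), each polling its own metrics-server pod whose Ready condition stays False throughout this window. A hedged sketch of the equivalent manual check, reusing one of the pod names that appears verbatim above; the `--context <profile>` placeholder is an assumption, since the profile name is not shown in this excerpt:

    # prints "False" for as long as the pod is unready
    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-bkhxn \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'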
	I0612 21:41:34.550931   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:34.567559   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:34.567618   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:34.602234   80762 cri.go:89] found id: ""
	I0612 21:41:34.602260   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.602267   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:34.602273   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:34.602318   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:34.639575   80762 cri.go:89] found id: ""
	I0612 21:41:34.639598   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.639605   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:34.639610   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:34.639656   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:34.681325   80762 cri.go:89] found id: ""
	I0612 21:41:34.681360   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.681368   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:34.681374   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:34.681478   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:34.721405   80762 cri.go:89] found id: ""
	I0612 21:41:34.721432   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.721444   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:34.721451   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:34.721517   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:34.764344   80762 cri.go:89] found id: ""
	I0612 21:41:34.764375   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.764386   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:34.764394   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:34.764459   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:34.802083   80762 cri.go:89] found id: ""
	I0612 21:41:34.802107   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.802115   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:34.802121   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:34.802181   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:34.843418   80762 cri.go:89] found id: ""
	I0612 21:41:34.843441   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.843450   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:34.843455   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:34.843501   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:34.877803   80762 cri.go:89] found id: ""
	I0612 21:41:34.877832   80762 logs.go:276] 0 containers: []
	W0612 21:41:34.877842   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:34.877852   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:34.877867   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:34.930515   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:34.930545   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:34.943705   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:34.943729   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:35.024912   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:35.024931   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:35.024941   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:35.109129   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:35.109165   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:37.651788   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:37.667901   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:37.667964   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:37.709599   80762 cri.go:89] found id: ""
	I0612 21:41:37.709627   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.709637   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:37.709645   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:37.709700   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:37.747150   80762 cri.go:89] found id: ""
	I0612 21:41:37.747191   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.747204   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:37.747212   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:37.747273   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:37.785528   80762 cri.go:89] found id: ""
	I0612 21:41:37.785552   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.785560   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:37.785567   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:37.785614   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:37.822363   80762 cri.go:89] found id: ""
	I0612 21:41:37.822390   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.822400   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:37.822408   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:37.822468   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:36.666054   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:39.165222   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:37.012397   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:39.012503   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:38.317132   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:40.821114   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:37.858285   80762 cri.go:89] found id: ""
	I0612 21:41:37.858395   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.858409   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:37.858416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:37.858466   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:37.897500   80762 cri.go:89] found id: ""
	I0612 21:41:37.897542   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.897556   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:37.897574   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:37.897635   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:37.937878   80762 cri.go:89] found id: ""
	I0612 21:41:37.937905   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.937921   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:37.937927   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:37.937985   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:37.978282   80762 cri.go:89] found id: ""
	I0612 21:41:37.978310   80762 logs.go:276] 0 containers: []
	W0612 21:41:37.978319   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:37.978327   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:37.978341   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:38.055864   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:38.055890   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:38.055903   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:38.135883   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:38.135918   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:38.178641   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:38.178668   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:38.236635   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:38.236686   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:40.759426   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:40.773526   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:40.773598   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:40.819130   80762 cri.go:89] found id: ""
	I0612 21:41:40.819161   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.819190   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:40.819202   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:40.819264   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:40.856176   80762 cri.go:89] found id: ""
	I0612 21:41:40.856204   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.856216   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:40.856224   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:40.856287   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:40.896709   80762 cri.go:89] found id: ""
	I0612 21:41:40.896739   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.896750   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:40.896759   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:40.896820   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:40.936431   80762 cri.go:89] found id: ""
	I0612 21:41:40.936457   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.936465   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:40.936471   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:40.936528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:40.979773   80762 cri.go:89] found id: ""
	I0612 21:41:40.979809   80762 logs.go:276] 0 containers: []
	W0612 21:41:40.979820   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:40.979828   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:40.979892   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:41.023885   80762 cri.go:89] found id: ""
	I0612 21:41:41.023910   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.023919   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:41.023925   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:41.024004   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:41.070370   80762 cri.go:89] found id: ""
	I0612 21:41:41.070396   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.070405   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:41.070411   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:41.070467   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:41.115282   80762 cri.go:89] found id: ""
	I0612 21:41:41.115311   80762 logs.go:276] 0 containers: []
	W0612 21:41:41.115321   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:41.115332   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:41.115346   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:41.128680   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:41.128710   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:41.206100   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:41.206125   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:41.206140   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:41.283499   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:41.283536   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:41.323275   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:41.323307   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:41.166258   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.666600   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:41.013379   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.512866   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.316659   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:45.816066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:43.875750   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:43.890156   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:43.890216   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:43.935105   80762 cri.go:89] found id: ""
	I0612 21:41:43.935135   80762 logs.go:276] 0 containers: []
	W0612 21:41:43.935147   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:43.935155   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:43.935236   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:43.980929   80762 cri.go:89] found id: ""
	I0612 21:41:43.980958   80762 logs.go:276] 0 containers: []
	W0612 21:41:43.980967   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:43.980973   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:43.981051   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:44.029387   80762 cri.go:89] found id: ""
	I0612 21:41:44.029409   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.029416   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:44.029421   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:44.029483   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:44.067415   80762 cri.go:89] found id: ""
	I0612 21:41:44.067449   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.067460   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:44.067468   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:44.067528   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:44.105093   80762 cri.go:89] found id: ""
	I0612 21:41:44.105117   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.105125   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:44.105131   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:44.105178   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:44.142647   80762 cri.go:89] found id: ""
	I0612 21:41:44.142680   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.142691   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:44.142699   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:44.142759   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:44.182725   80762 cri.go:89] found id: ""
	I0612 21:41:44.182756   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.182767   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:44.182775   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:44.182836   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:44.219538   80762 cri.go:89] found id: ""
	I0612 21:41:44.219568   80762 logs.go:276] 0 containers: []
	W0612 21:41:44.219580   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:44.219593   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:44.219608   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:44.272234   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:44.272267   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:44.285631   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:44.285663   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:44.362453   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:44.362470   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:44.362482   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:44.444624   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:44.444656   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:46.985731   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:46.999749   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:46.999819   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:47.035051   80762 cri.go:89] found id: ""
	I0612 21:41:47.035073   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.035080   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:47.035086   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:47.035136   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:47.077929   80762 cri.go:89] found id: ""
	I0612 21:41:47.077964   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.077975   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:47.077982   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:47.078062   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:47.111621   80762 cri.go:89] found id: ""
	I0612 21:41:47.111660   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.111671   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:47.111679   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:47.111744   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:47.150700   80762 cri.go:89] found id: ""
	I0612 21:41:47.150725   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.150733   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:47.150739   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:47.150787   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:47.189547   80762 cri.go:89] found id: ""
	I0612 21:41:47.189576   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.189586   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:47.189593   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:47.189660   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:47.229482   80762 cri.go:89] found id: ""
	I0612 21:41:47.229510   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.229522   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:47.229530   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:47.229599   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:47.266798   80762 cri.go:89] found id: ""
	I0612 21:41:47.266826   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.266837   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:47.266844   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:47.266906   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:47.302256   80762 cri.go:89] found id: ""
	I0612 21:41:47.302280   80762 logs.go:276] 0 containers: []
	W0612 21:41:47.302287   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:47.302295   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:47.302306   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:47.354485   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:47.354526   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:47.368689   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:47.368713   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:47.438219   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:47.438244   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:47.438257   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:47.514199   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:47.514227   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:46.165541   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:48.664957   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:45.512922   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:47.513491   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.012630   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:47.817136   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.317083   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:50.056394   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:50.069348   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:50.069482   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:50.106057   80762 cri.go:89] found id: ""
	I0612 21:41:50.106087   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.106097   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:50.106104   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:50.106162   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:50.144532   80762 cri.go:89] found id: ""
	I0612 21:41:50.144560   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.144571   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:50.144579   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:50.144662   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:50.184549   80762 cri.go:89] found id: ""
	I0612 21:41:50.184575   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.184583   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:50.184588   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:50.184648   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:50.228520   80762 cri.go:89] found id: ""
	I0612 21:41:50.228555   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.228569   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:50.228578   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:50.228649   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:50.265697   80762 cri.go:89] found id: ""
	I0612 21:41:50.265726   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.265737   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:50.265744   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:50.265818   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:50.301353   80762 cri.go:89] found id: ""
	I0612 21:41:50.301393   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.301410   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:50.301416   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:50.301477   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:50.337273   80762 cri.go:89] found id: ""
	I0612 21:41:50.337298   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.337306   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:50.337313   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:50.337374   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:50.383090   80762 cri.go:89] found id: ""
	I0612 21:41:50.383116   80762 logs.go:276] 0 containers: []
	W0612 21:41:50.383126   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:50.383138   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:50.383151   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:50.454193   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:50.454240   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:50.477753   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:50.477779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:50.544052   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:50.544075   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:50.544091   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:50.626441   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:50.626480   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:50.666068   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.666287   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.013142   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:54.512869   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:52.318942   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:54.816918   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:56.818011   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:53.168599   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:53.181682   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:53.181764   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:53.228060   80762 cri.go:89] found id: ""
	I0612 21:41:53.228096   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.228107   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:53.228115   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:53.228176   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:53.264867   80762 cri.go:89] found id: ""
	I0612 21:41:53.264890   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.264898   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:53.264903   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:53.264950   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:53.298351   80762 cri.go:89] found id: ""
	I0612 21:41:53.298378   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.298386   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:53.298392   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:53.298448   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:53.335888   80762 cri.go:89] found id: ""
	I0612 21:41:53.335910   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.335917   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:53.335922   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:53.335980   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:53.376131   80762 cri.go:89] found id: ""
	I0612 21:41:53.376166   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.376175   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:53.376183   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:53.376240   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:53.412059   80762 cri.go:89] found id: ""
	I0612 21:41:53.412082   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.412088   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:53.412097   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:53.412142   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:53.446776   80762 cri.go:89] found id: ""
	I0612 21:41:53.446805   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.446816   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:53.446823   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:53.446894   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:53.482411   80762 cri.go:89] found id: ""
	I0612 21:41:53.482433   80762 logs.go:276] 0 containers: []
	W0612 21:41:53.482441   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:53.482449   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:53.482460   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:53.522419   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:53.522448   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:53.573107   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:53.573141   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:53.587101   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:53.587147   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:53.665631   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:53.665660   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:53.665675   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:56.242482   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:56.255606   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:56.255682   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:56.290837   80762 cri.go:89] found id: ""
	I0612 21:41:56.290861   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.290872   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:56.290880   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:56.290938   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:56.325429   80762 cri.go:89] found id: ""
	I0612 21:41:56.325458   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.325466   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:56.325471   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:56.325534   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:56.359809   80762 cri.go:89] found id: ""
	I0612 21:41:56.359835   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.359845   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:56.359852   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:56.359912   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:56.397775   80762 cri.go:89] found id: ""
	I0612 21:41:56.397803   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.397815   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:56.397823   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:56.397884   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:56.433917   80762 cri.go:89] found id: ""
	I0612 21:41:56.433945   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.433956   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:56.433963   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:56.434028   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:56.467390   80762 cri.go:89] found id: ""
	I0612 21:41:56.467419   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.467429   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:56.467438   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:56.467496   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:56.504014   80762 cri.go:89] found id: ""
	I0612 21:41:56.504048   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.504059   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:56.504067   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:56.504138   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:56.544157   80762 cri.go:89] found id: ""
	I0612 21:41:56.544187   80762 logs.go:276] 0 containers: []
	W0612 21:41:56.544198   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:56.544209   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:56.544224   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:56.595431   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:56.595462   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:56.608897   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:56.608936   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:56.682706   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:56.682735   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:56.682749   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:56.762598   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:56.762634   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:41:55.166152   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:57.167363   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.666265   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:56.514832   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:58.515091   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.317285   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:01.818345   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:41:59.302898   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:41:59.317901   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:41:59.317976   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:41:59.360136   80762 cri.go:89] found id: ""
	I0612 21:41:59.360164   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.360174   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:41:59.360181   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:41:59.360249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:41:59.397205   80762 cri.go:89] found id: ""
	I0612 21:41:59.397233   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.397244   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:41:59.397252   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:41:59.397312   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:41:59.437063   80762 cri.go:89] found id: ""
	I0612 21:41:59.437093   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.437105   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:41:59.437113   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:41:59.437172   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:41:59.472800   80762 cri.go:89] found id: ""
	I0612 21:41:59.472827   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.472835   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:41:59.472843   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:41:59.472904   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:41:59.509452   80762 cri.go:89] found id: ""
	I0612 21:41:59.509474   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.509482   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:41:59.509487   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:41:59.509534   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:41:59.546121   80762 cri.go:89] found id: ""
	I0612 21:41:59.546151   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.546162   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:41:59.546170   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:41:59.546231   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:41:59.582983   80762 cri.go:89] found id: ""
	I0612 21:41:59.583007   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.583014   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:41:59.583020   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:41:59.583065   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:41:59.621110   80762 cri.go:89] found id: ""
	I0612 21:41:59.621148   80762 logs.go:276] 0 containers: []
	W0612 21:41:59.621160   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:41:59.621171   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:41:59.621187   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:41:59.673113   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:41:59.673143   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:41:59.688106   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:41:59.688171   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:41:59.767653   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:41:59.767678   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:41:59.767692   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:41:59.848467   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:41:59.848507   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:02.391324   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:02.406543   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:02.406621   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:02.442225   80762 cri.go:89] found id: ""
	I0612 21:42:02.442255   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.442265   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:02.442273   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:02.442341   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:02.479445   80762 cri.go:89] found id: ""
	I0612 21:42:02.479476   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.479487   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:02.479495   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:02.479557   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:02.517654   80762 cri.go:89] found id: ""
	I0612 21:42:02.517685   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.517697   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:02.517705   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:02.517775   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:02.562743   80762 cri.go:89] found id: ""
	I0612 21:42:02.562777   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.562788   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:02.562807   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:02.562873   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:02.597775   80762 cri.go:89] found id: ""
	I0612 21:42:02.597805   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.597816   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:02.597824   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:02.597886   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:02.633869   80762 cri.go:89] found id: ""
	I0612 21:42:02.633901   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.633913   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:02.633921   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:02.633979   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:02.671931   80762 cri.go:89] found id: ""
	I0612 21:42:02.671962   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.671974   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:02.671982   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:02.672044   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:02.709162   80762 cri.go:89] found id: ""
	I0612 21:42:02.709192   80762 logs.go:276] 0 containers: []
	W0612 21:42:02.709204   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:02.709214   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:02.709228   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:02.722937   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:02.722967   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:02.798249   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
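(The repeated "connection to the server localhost:8443 was refused" means no apiserver is answering on this node yet, which is why "describe nodes" keeps failing. A quick manual confirmation on the host, assuming the ss utility is installed, is a sketch like:)

    # Sketch: confirm whether anything is listening on the apiserver port.
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"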
	I0612 21:42:02.798275   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:02.798292   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:02.165664   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:04.166215   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:01.012458   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:03.513414   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:04.317221   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:06.818062   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:02.876339   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:02.876376   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:02.913080   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:02.913106   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:05.464433   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:05.478249   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:05.478326   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:05.520742   80762 cri.go:89] found id: ""
	I0612 21:42:05.520765   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.520772   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:05.520778   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:05.520840   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:05.564864   80762 cri.go:89] found id: ""
	I0612 21:42:05.564896   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.564904   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:05.564911   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:05.564956   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:05.602917   80762 cri.go:89] found id: ""
	I0612 21:42:05.602942   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.602950   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:05.602956   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:05.603040   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:05.645073   80762 cri.go:89] found id: ""
	I0612 21:42:05.645104   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.645112   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:05.645119   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:05.645166   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:05.684133   80762 cri.go:89] found id: ""
	I0612 21:42:05.684165   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.684176   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:05.684184   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:05.684249   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:05.721461   80762 cri.go:89] found id: ""
	I0612 21:42:05.721489   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.721499   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:05.721506   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:05.721573   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:05.756710   80762 cri.go:89] found id: ""
	I0612 21:42:05.756744   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.756755   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:05.756763   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:05.756814   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:05.792182   80762 cri.go:89] found id: ""
	I0612 21:42:05.792210   80762 logs.go:276] 0 containers: []
	W0612 21:42:05.792220   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:05.792230   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:05.792245   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:05.836597   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:05.836632   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:05.888704   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:05.888742   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:05.903354   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:05.903387   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:05.976146   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:05.976169   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:05.976183   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:06.664789   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.666830   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:06.013885   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.512997   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:09.316398   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:11.317014   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:08.559612   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:08.573592   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:08.573648   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:08.613347   80762 cri.go:89] found id: ""
	I0612 21:42:08.613373   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.613381   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:08.613387   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:08.613449   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:08.650606   80762 cri.go:89] found id: ""
	I0612 21:42:08.650634   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.650643   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:08.650648   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:08.650692   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:08.687056   80762 cri.go:89] found id: ""
	I0612 21:42:08.687087   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.687097   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:08.687105   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:08.687191   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:08.723112   80762 cri.go:89] found id: ""
	I0612 21:42:08.723138   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.723146   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:08.723161   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:08.723238   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:08.764772   80762 cri.go:89] found id: ""
	I0612 21:42:08.764801   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.764812   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:08.764820   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:08.764888   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:08.801914   80762 cri.go:89] found id: ""
	I0612 21:42:08.801944   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.801954   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:08.801962   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:08.802025   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:08.837991   80762 cri.go:89] found id: ""
	I0612 21:42:08.838017   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.838025   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:08.838030   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:08.838084   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:08.874977   80762 cri.go:89] found id: ""
	I0612 21:42:08.875016   80762 logs.go:276] 0 containers: []
	W0612 21:42:08.875027   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:08.875039   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:08.875058   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:08.931628   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:08.931659   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:08.946763   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:08.946791   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:09.028039   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:09.028061   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:09.028079   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:09.104350   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:09.104406   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:11.645114   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:11.659382   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:11.659455   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:11.702205   80762 cri.go:89] found id: ""
	I0612 21:42:11.702236   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.702246   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:11.702254   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:11.702309   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:11.748328   80762 cri.go:89] found id: ""
	I0612 21:42:11.748350   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.748357   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:11.748362   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:11.748408   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:11.788980   80762 cri.go:89] found id: ""
	I0612 21:42:11.789009   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.789020   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:11.789027   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:11.789083   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:11.829886   80762 cri.go:89] found id: ""
	I0612 21:42:11.829910   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.829920   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:11.829928   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:11.830006   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:11.870088   80762 cri.go:89] found id: ""
	I0612 21:42:11.870120   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.870131   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:11.870138   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:11.870201   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:11.907862   80762 cri.go:89] found id: ""
	I0612 21:42:11.907893   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.907905   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:11.907913   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:11.907973   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:11.947773   80762 cri.go:89] found id: ""
	I0612 21:42:11.947798   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.947808   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:11.947816   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:11.947876   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:11.987806   80762 cri.go:89] found id: ""
	I0612 21:42:11.987837   80762 logs.go:276] 0 containers: []
	W0612 21:42:11.987848   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:11.987859   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:11.987878   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:12.043451   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:12.043481   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:12.057946   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:12.057980   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:12.134265   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:12.134298   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:12.134310   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:12.221276   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:12.221315   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:11.165305   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.165846   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:11.012728   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.512292   80243 pod_ready.go:102] pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:13.512327   80243 pod_ready.go:81] duration metric: took 4m0.006424182s for pod "metrics-server-569cc877fc-xj4xk" in "kube-system" namespace to be "Ready" ...
	E0612 21:42:13.512336   80243 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0612 21:42:13.512343   80243 pod_ready.go:38] duration metric: took 4m5.595554955s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
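(Here the wait gave up after the 4-minute window with metrics-server-569cc877fc-xj4xk still not Ready. A hand-run equivalent of that readiness check, a sketch assuming kubectl is pointed at the same cluster, would be:)

    # Sketch: inspect the pod, then reproduce the readiness wait manually.
    kubectl -n kube-system get pod metrics-server-569cc877fc-xj4xk -o wide
    kubectl -n kube-system wait --for=condition=Ready \
      pod/metrics-server-569cc877fc-xj4xk --timeout=240s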
	I0612 21:42:13.512359   80243 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:42:13.512383   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:13.512428   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:13.571855   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:13.571882   80243 cri.go:89] found id: ""
	I0612 21:42:13.571892   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:13.571942   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.576505   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:13.576557   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:13.614768   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:13.614792   80243 cri.go:89] found id: ""
	I0612 21:42:13.614799   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:13.614847   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.619276   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:13.619342   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:13.662832   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:13.662856   80243 cri.go:89] found id: ""
	I0612 21:42:13.662866   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:13.662931   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.667956   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:13.668031   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:13.710456   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:13.710479   80243 cri.go:89] found id: ""
	I0612 21:42:13.710487   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:13.710540   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.715411   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:13.715480   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:13.759924   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:13.759952   80243 cri.go:89] found id: ""
	I0612 21:42:13.759965   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:13.760027   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.764854   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:13.764919   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:13.804802   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:13.804826   80243 cri.go:89] found id: ""
	I0612 21:42:13.804834   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:13.804891   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.809421   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:13.809489   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:13.846580   80243 cri.go:89] found id: ""
	I0612 21:42:13.846615   80243 logs.go:276] 0 containers: []
	W0612 21:42:13.846625   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:13.846633   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:13.846685   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:13.893480   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:13.893504   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:13.893510   80243 cri.go:89] found id: ""
	I0612 21:42:13.893523   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:13.893571   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.898530   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:13.905072   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:13.905100   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:13.942165   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:13.942195   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:13.981852   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:13.981882   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:14.018431   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:14.018457   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:14.067616   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:14.067645   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:14.082853   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:14.082886   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:14.220156   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:14.220188   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:14.274395   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:14.274430   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:14.319087   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:14.319116   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:14.834792   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:14.834831   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:14.893213   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:14.893252   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:14.957423   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:14.957466   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:15.013756   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:15.013803   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:13.318558   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:15.318904   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:14.760949   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:14.775242   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:14.775356   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:14.818818   80762 cri.go:89] found id: ""
	I0612 21:42:14.818847   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.818856   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:14.818863   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:14.818931   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:14.859106   80762 cri.go:89] found id: ""
	I0612 21:42:14.859146   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.859157   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:14.859164   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:14.859247   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:14.894993   80762 cri.go:89] found id: ""
	I0612 21:42:14.895016   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.895026   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:14.895037   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:14.895087   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:14.943534   80762 cri.go:89] found id: ""
	I0612 21:42:14.943561   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.943572   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:14.943579   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:14.943645   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:14.985243   80762 cri.go:89] found id: ""
	I0612 21:42:14.985267   80762 logs.go:276] 0 containers: []
	W0612 21:42:14.985274   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:14.985280   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:14.985328   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:15.029253   80762 cri.go:89] found id: ""
	I0612 21:42:15.029286   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.029297   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:15.029305   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:15.029371   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:15.063471   80762 cri.go:89] found id: ""
	I0612 21:42:15.063499   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.063510   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:15.063517   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:15.063580   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:15.101152   80762 cri.go:89] found id: ""
	I0612 21:42:15.101181   80762 logs.go:276] 0 containers: []
	W0612 21:42:15.101201   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:15.101212   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:15.101227   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:15.178398   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:15.178416   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:15.178429   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:15.255420   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:15.255468   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:15.295492   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:15.295519   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:15.345010   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:15.345051   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:15.166546   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.666141   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:19.672626   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.561453   80243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:17.579672   80243 api_server.go:72] duration metric: took 4m17.376220984s to wait for apiserver process to appear ...
	I0612 21:42:17.579702   80243 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:42:17.579741   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:17.579787   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:17.620290   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:17.620318   80243 cri.go:89] found id: ""
	I0612 21:42:17.620325   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:17.620387   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.624598   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:17.624658   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:17.665957   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:17.665985   80243 cri.go:89] found id: ""
	I0612 21:42:17.665995   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:17.666056   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.671143   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:17.671222   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:17.717377   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:17.717396   80243 cri.go:89] found id: ""
	I0612 21:42:17.717404   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:17.717459   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.721710   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:17.721774   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:17.762712   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:17.762739   80243 cri.go:89] found id: ""
	I0612 21:42:17.762749   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:17.762807   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.767258   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:17.767329   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:17.803905   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:17.803956   80243 cri.go:89] found id: ""
	I0612 21:42:17.803969   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:17.804034   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.808260   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:17.808323   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:17.847402   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:17.847432   80243 cri.go:89] found id: ""
	I0612 21:42:17.847441   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:17.847502   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.851685   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:17.851757   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:17.897026   80243 cri.go:89] found id: ""
	I0612 21:42:17.897051   80243 logs.go:276] 0 containers: []
	W0612 21:42:17.897059   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:17.897065   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:17.897122   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:17.953764   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:17.953793   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:17.953799   80243 cri.go:89] found id: ""
	I0612 21:42:17.953808   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:17.953875   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.959822   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:17.965103   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:17.965127   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:18.089205   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:18.089229   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:18.153823   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:18.153876   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:18.198010   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:18.198037   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:18.255380   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:18.255410   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:18.271692   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:18.271725   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:18.318018   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:18.318049   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:18.379352   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:18.379386   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:18.437854   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:18.437884   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:18.487618   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:18.487650   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:18.934735   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:18.934784   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:18.983010   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:18.983050   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:19.043569   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:19.043605   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:17.819422   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:20.315423   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:17.862640   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:17.879256   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:17.879333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:17.918910   80762 cri.go:89] found id: ""
	I0612 21:42:17.918940   80762 logs.go:276] 0 containers: []
	W0612 21:42:17.918951   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:17.918958   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:17.919018   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:17.959701   80762 cri.go:89] found id: ""
	I0612 21:42:17.959726   80762 logs.go:276] 0 containers: []
	W0612 21:42:17.959734   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:17.959739   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:17.959820   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:18.005096   80762 cri.go:89] found id: ""
	I0612 21:42:18.005125   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.005142   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:18.005150   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:18.005211   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:18.046877   80762 cri.go:89] found id: ""
	I0612 21:42:18.046907   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.046919   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:18.046927   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:18.046992   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:18.087907   80762 cri.go:89] found id: ""
	I0612 21:42:18.087934   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.087946   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:18.087953   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:18.088016   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:18.139423   80762 cri.go:89] found id: ""
	I0612 21:42:18.139453   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.139464   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:18.139473   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:18.139535   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:18.180433   80762 cri.go:89] found id: ""
	I0612 21:42:18.180459   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.180469   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:18.180476   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:18.180537   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:18.220966   80762 cri.go:89] found id: ""
	I0612 21:42:18.220996   80762 logs.go:276] 0 containers: []
	W0612 21:42:18.221005   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:18.221015   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:18.221033   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:18.276006   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:18.276031   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:18.290975   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:18.291026   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:18.369318   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:18.369345   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:18.369359   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:18.451336   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:18.451380   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:21.016353   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:21.030544   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:21.030611   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:21.072558   80762 cri.go:89] found id: ""
	I0612 21:42:21.072583   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.072591   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:21.072596   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:21.072649   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:21.106320   80762 cri.go:89] found id: ""
	I0612 21:42:21.106352   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.106364   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:21.106372   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:21.106431   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:21.139155   80762 cri.go:89] found id: ""
	I0612 21:42:21.139201   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.139212   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:21.139220   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:21.139285   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:21.178731   80762 cri.go:89] found id: ""
	I0612 21:42:21.178762   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.178772   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:21.178779   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:21.178838   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:21.213606   80762 cri.go:89] found id: ""
	I0612 21:42:21.213635   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.213645   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:21.213652   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:21.213707   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:21.250966   80762 cri.go:89] found id: ""
	I0612 21:42:21.250993   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.251009   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:21.251017   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:21.251084   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:21.289434   80762 cri.go:89] found id: ""
	I0612 21:42:21.289457   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.289465   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:21.289474   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:21.289520   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:21.329028   80762 cri.go:89] found id: ""
	I0612 21:42:21.329058   80762 logs.go:276] 0 containers: []
	W0612 21:42:21.329069   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:21.329080   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:21.329098   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:21.342621   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:21.342648   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:21.418742   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:21.418766   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:21.418779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:21.493909   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:21.493944   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:21.534693   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:21.534723   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:22.165337   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.166122   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:21.581443   80243 api_server.go:253] Checking apiserver healthz at https://192.168.61.80:8444/healthz ...
	I0612 21:42:21.586756   80243 api_server.go:279] https://192.168.61.80:8444/healthz returned 200:
	ok
	I0612 21:42:21.587670   80243 api_server.go:141] control plane version: v1.30.1
	I0612 21:42:21.587691   80243 api_server.go:131] duration metric: took 4.007982669s to wait for apiserver health ...
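(The healthz probe above finally succeeded against https://192.168.61.80:8444. The same check can be run by hand; a minimal sketch, using -k because the apiserver certificate is typically signed by the cluster CA rather than a CA in the host trust store:)

    # Sketch: the same apiserver healthz probe, run manually from a shell.
    curl -sk https://192.168.61.80:8444/healthz && echo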
	I0612 21:42:21.587699   80243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:42:21.587720   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:21.587761   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:21.627942   80243 cri.go:89] found id: "5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:21.627965   80243 cri.go:89] found id: ""
	I0612 21:42:21.627974   80243 logs.go:276] 1 containers: [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249]
	I0612 21:42:21.628036   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.632308   80243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:21.632380   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:21.674453   80243 cri.go:89] found id: "d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:21.674474   80243 cri.go:89] found id: ""
	I0612 21:42:21.674482   80243 logs.go:276] 1 containers: [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1]
	I0612 21:42:21.674539   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.679303   80243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:21.679376   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:21.717454   80243 cri.go:89] found id: "9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:21.717483   80243 cri.go:89] found id: ""
	I0612 21:42:21.717492   80243 logs.go:276] 1 containers: [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266]
	I0612 21:42:21.717555   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.722113   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:21.722176   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:21.758752   80243 cri.go:89] found id: "74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:21.758780   80243 cri.go:89] found id: ""
	I0612 21:42:21.758790   80243 logs.go:276] 1 containers: [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f]
	I0612 21:42:21.758847   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.763397   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:21.763465   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:21.802552   80243 cri.go:89] found id: "976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:21.802574   80243 cri.go:89] found id: ""
	I0612 21:42:21.802583   80243 logs.go:276] 1 containers: [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd]
	I0612 21:42:21.802641   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.807570   80243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:21.807633   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:21.855093   80243 cri.go:89] found id: "73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:21.855118   80243 cri.go:89] found id: ""
	I0612 21:42:21.855128   80243 logs.go:276] 1 containers: [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031]
	I0612 21:42:21.855212   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.860163   80243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:21.860231   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:21.907934   80243 cri.go:89] found id: ""
	I0612 21:42:21.907960   80243 logs.go:276] 0 containers: []
	W0612 21:42:21.907969   80243 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:21.907977   80243 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0612 21:42:21.908046   80243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0612 21:42:21.950085   80243 cri.go:89] found id: "2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:21.950114   80243 cri.go:89] found id: "58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:21.950120   80243 cri.go:89] found id: ""
	I0612 21:42:21.950128   80243 logs.go:276] 2 containers: [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70]
	I0612 21:42:21.950186   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.955633   80243 ssh_runner.go:195] Run: which crictl
	I0612 21:42:21.960017   80243 logs.go:123] Gathering logs for etcd [d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1] ...
	I0612 21:42:21.960038   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d482ceea3aaf0374d10e70f6e8621cdac6e6c5390167452dbbe48353c9f6b1c1"
	I0612 21:42:22.015659   80243 logs.go:123] Gathering logs for kube-controller-manager [73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031] ...
	I0612 21:42:22.015708   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73a7a9216e1bd21b98d3794d6ad179497be758f7effc4b022d976e52b7496031"
	I0612 21:42:22.074063   80243 logs.go:123] Gathering logs for storage-provisioner [2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b] ...
	I0612 21:42:22.074093   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ec17a45953ec9823cf13066c51caccdbdbb00bc01614cd3544ce4f012c0249b"
	I0612 21:42:22.113545   80243 logs.go:123] Gathering logs for storage-provisioner [58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70] ...
	I0612 21:42:22.113581   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58692ec525480d5f9e26557e9d2a116208b6b2390e3164770389edfdc8ca2a70"
	I0612 21:42:22.152550   80243 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:22.152583   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:22.556816   80243 logs.go:123] Gathering logs for container status ...
	I0612 21:42:22.556856   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:22.602506   80243 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:22.602542   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:22.655545   80243 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:22.655577   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 21:42:22.775731   80243 logs.go:123] Gathering logs for kube-apiserver [5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249] ...
	I0612 21:42:22.775775   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a2481a728ef87c8bd3c887883f36abd51f1dcc78ac063c177ae51fc71f91249"
	I0612 21:42:22.827447   80243 logs.go:123] Gathering logs for coredns [9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266] ...
	I0612 21:42:22.827476   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9247a0b60b2357fae00ee269e93e763da9596736a97c790cef4ac2a13f15f266"
	I0612 21:42:22.864866   80243 logs.go:123] Gathering logs for kube-scheduler [74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f] ...
	I0612 21:42:22.864898   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74488395e0d904b4a900828c7814c685611528e0a85560729d44514f9e499c5f"
	I0612 21:42:22.903885   80243 logs.go:123] Gathering logs for kube-proxy [976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd] ...
	I0612 21:42:22.903912   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976fbe2261bae7de259f902e60dc0528f6d0f09adc0403d5ce17da5824c5f6fd"
	I0612 21:42:22.947166   80243 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:22.947214   80243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:25.472711   80243 system_pods.go:59] 8 kube-system pods found
	I0612 21:42:25.472743   80243 system_pods.go:61] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running
	I0612 21:42:25.472750   80243 system_pods.go:61] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running
	I0612 21:42:25.472755   80243 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running
	I0612 21:42:25.472761   80243 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running
	I0612 21:42:25.472765   80243 system_pods.go:61] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running
	I0612 21:42:25.472770   80243 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running
	I0612 21:42:25.472777   80243 system_pods.go:61] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:42:25.472783   80243 system_pods.go:61] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running
	I0612 21:42:25.472794   80243 system_pods.go:74] duration metric: took 3.885088008s to wait for pod list to return data ...
	I0612 21:42:25.472803   80243 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:42:25.475046   80243 default_sa.go:45] found service account: "default"
	I0612 21:42:25.475072   80243 default_sa.go:55] duration metric: took 2.260179ms for default service account to be created ...
	I0612 21:42:25.475082   80243 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:42:25.479903   80243 system_pods.go:86] 8 kube-system pods found
	I0612 21:42:25.479925   80243 system_pods.go:89] "coredns-7db6d8ff4d-cllsk" [85e26b02-5b11-490e-a1b9-0f12c5ba3830] Running
	I0612 21:42:25.479931   80243 system_pods.go:89] "etcd-default-k8s-diff-port-376087" [c194b5d6-c5ce-419c-9680-a97b6036d50e] Running
	I0612 21:42:25.479935   80243 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-376087" [06340eda-8ec8-4347-800a-6553ec208886] Running
	I0612 21:42:25.479940   80243 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-376087" [c7ee978b-c4d4-474f-b92c-f20616f56799] Running
	I0612 21:42:25.479944   80243 system_pods.go:89] "kube-proxy-8lrgv" [98f9342e-2677-44be-8e22-2a8f45feeb57] Running
	I0612 21:42:25.479950   80243 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-376087" [e1083e32-0c06-4109-9c2f-ca1c8d06416c] Running
	I0612 21:42:25.479959   80243 system_pods.go:89] "metrics-server-569cc877fc-xj4xk" [d3ac0cb2-602d-489c-baeb-fa9a363de8af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:42:25.479969   80243 system_pods.go:89] "storage-provisioner" [52007a01-3640-4f32-8a4b-94e6a2e849b0] Running
	I0612 21:42:25.479979   80243 system_pods.go:126] duration metric: took 4.890624ms to wait for k8s-apps to be running ...
	I0612 21:42:25.479990   80243 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:42:25.480037   80243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:42:25.496529   80243 system_svc.go:56] duration metric: took 16.534285ms WaitForService to wait for kubelet
	I0612 21:42:25.496549   80243 kubeadm.go:576] duration metric: took 4m25.293104149s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:42:25.496565   80243 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:42:25.499277   80243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:42:25.499293   80243 node_conditions.go:123] node cpu capacity is 2
	I0612 21:42:25.499304   80243 node_conditions.go:105] duration metric: took 2.734965ms to run NodePressure ...
	I0612 21:42:25.499314   80243 start.go:240] waiting for startup goroutines ...
	I0612 21:42:25.499320   80243 start.go:245] waiting for cluster config update ...
	I0612 21:42:25.499339   80243 start.go:254] writing updated cluster config ...
	I0612 21:42:25.499628   80243 ssh_runner.go:195] Run: rm -f paused
	I0612 21:42:25.547780   80243 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:42:25.549693   80243 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-376087" cluster and "default" namespace by default
	I0612 21:42:22.317078   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.317826   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:26.818102   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:24.086466   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:24.101820   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:24.101877   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:24.145732   80762 cri.go:89] found id: ""
	I0612 21:42:24.145757   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.145767   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:24.145774   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:24.145832   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:24.182765   80762 cri.go:89] found id: ""
	I0612 21:42:24.182788   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.182795   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:24.182801   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:24.182889   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:24.235093   80762 cri.go:89] found id: ""
	I0612 21:42:24.235121   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.235129   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:24.235134   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:24.235208   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:24.269788   80762 cri.go:89] found id: ""
	I0612 21:42:24.269809   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.269816   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:24.269822   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:24.269867   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:24.306594   80762 cri.go:89] found id: ""
	I0612 21:42:24.306620   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.306628   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:24.306634   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:24.306693   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:24.343766   80762 cri.go:89] found id: ""
	I0612 21:42:24.343786   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.343795   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:24.343802   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:24.343858   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:24.384417   80762 cri.go:89] found id: ""
	I0612 21:42:24.384447   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.384457   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:24.384464   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:24.384524   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:24.424935   80762 cri.go:89] found id: ""
	I0612 21:42:24.424958   80762 logs.go:276] 0 containers: []
	W0612 21:42:24.424965   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:24.424974   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:24.424988   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:24.499737   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:24.499771   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:24.537631   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:24.537667   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:24.593743   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:24.593779   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:24.608078   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:24.608107   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:24.679729   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:27.180828   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:27.195484   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:27.195552   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:27.235725   80762 cri.go:89] found id: ""
	I0612 21:42:27.235750   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.235761   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:27.235768   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:27.235816   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:27.279763   80762 cri.go:89] found id: ""
	I0612 21:42:27.279795   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.279806   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:27.279814   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:27.279875   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:27.320510   80762 cri.go:89] found id: ""
	I0612 21:42:27.320534   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.320543   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:27.320554   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:27.320641   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:27.359195   80762 cri.go:89] found id: ""
	I0612 21:42:27.359227   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.359239   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:27.359247   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:27.359312   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:27.394977   80762 cri.go:89] found id: ""
	I0612 21:42:27.395004   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.395015   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:27.395033   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:27.395099   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:27.431905   80762 cri.go:89] found id: ""
	I0612 21:42:27.431925   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.431933   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:27.431945   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:27.431990   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:27.469929   80762 cri.go:89] found id: ""
	I0612 21:42:27.469954   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.469961   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:27.469967   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:27.470024   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:27.505128   80762 cri.go:89] found id: ""
	I0612 21:42:27.505153   80762 logs.go:276] 0 containers: []
	W0612 21:42:27.505160   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:27.505169   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:27.505180   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:27.556739   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:27.556771   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:27.572730   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:27.572757   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:27.646797   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:27.646819   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:27.646836   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:27.726554   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:27.726599   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:26.665496   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:29.166323   80404 pod_ready.go:102] pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:29.316302   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:31.316334   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:30.268770   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:30.282575   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:30.282635   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:30.321243   80762 cri.go:89] found id: ""
	I0612 21:42:30.321276   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.321288   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:30.321295   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:30.321342   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:30.359403   80762 cri.go:89] found id: ""
	I0612 21:42:30.359432   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.359443   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:30.359451   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:30.359505   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:30.395967   80762 cri.go:89] found id: ""
	I0612 21:42:30.396006   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.396015   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:30.396028   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:30.396087   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:30.438093   80762 cri.go:89] found id: ""
	I0612 21:42:30.438123   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.438132   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:30.438138   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:30.438192   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:30.476859   80762 cri.go:89] found id: ""
	I0612 21:42:30.476888   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.476898   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:30.476905   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:30.476968   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:30.512998   80762 cri.go:89] found id: ""
	I0612 21:42:30.513026   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.513037   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:30.513045   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:30.513106   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:30.548822   80762 cri.go:89] found id: ""
	I0612 21:42:30.548847   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.548855   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:30.548861   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:30.548908   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:30.584385   80762 cri.go:89] found id: ""
	I0612 21:42:30.584417   80762 logs.go:276] 0 containers: []
	W0612 21:42:30.584426   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:30.584439   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:30.584454   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:30.685995   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:30.686019   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:30.686030   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:30.778789   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:30.778827   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:30.819467   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:30.819511   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:30.872563   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:30.872599   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:31.659828   80404 pod_ready.go:81] duration metric: took 4m0.000909177s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" ...
	E0612 21:42:31.659857   80404 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-bkhxn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0612 21:42:31.659875   80404 pod_ready.go:38] duration metric: took 4m13.021158077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:42:31.659904   80404 kubeadm.go:591] duration metric: took 4m20.257268424s to restartPrimaryControlPlane
	W0612 21:42:31.659968   80404 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:42:31.660002   80404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:42:33.316457   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:35.316525   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:33.387831   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:33.401663   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:42:33.401740   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:42:33.439690   80762 cri.go:89] found id: ""
	I0612 21:42:33.439723   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.439735   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:42:33.439743   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:42:33.439805   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:42:33.480330   80762 cri.go:89] found id: ""
	I0612 21:42:33.480357   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.480365   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:42:33.480371   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:42:33.480422   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:42:33.520367   80762 cri.go:89] found id: ""
	I0612 21:42:33.520396   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.520407   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:42:33.520415   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:42:33.520476   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:42:33.556859   80762 cri.go:89] found id: ""
	I0612 21:42:33.556892   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.556904   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:42:33.556911   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:42:33.556963   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:42:33.595982   80762 cri.go:89] found id: ""
	I0612 21:42:33.596014   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.596024   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:42:33.596030   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:42:33.596091   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:42:33.630942   80762 cri.go:89] found id: ""
	I0612 21:42:33.630974   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.630986   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:42:33.630994   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:42:33.631055   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:42:33.671649   80762 cri.go:89] found id: ""
	I0612 21:42:33.671676   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.671684   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:42:33.671690   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:42:33.671734   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:42:33.716664   80762 cri.go:89] found id: ""
	I0612 21:42:33.716690   80762 logs.go:276] 0 containers: []
	W0612 21:42:33.716700   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:42:33.716711   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:42:33.716726   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 21:42:33.734168   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:42:33.734198   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:42:33.826469   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:42:33.826491   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:42:33.826507   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:42:33.915109   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:42:33.915142   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:42:33.957969   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:42:33.958007   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:42:36.515258   80762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:42:36.529603   80762 kubeadm.go:591] duration metric: took 4m4.234271001s to restartPrimaryControlPlane
	W0612 21:42:36.529688   80762 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:42:36.529719   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:42:37.316720   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:39.317633   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:41.816783   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:41.545629   80762 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.01588354s)
	I0612 21:42:41.545734   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:42:41.561025   80762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:42:41.572788   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:42:41.583027   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:42:41.583052   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:42:41.583095   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:42:41.593433   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:42:41.593502   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:42:41.603944   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:42:41.613382   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:42:41.613432   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:42:41.622874   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:42:41.632270   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:42:41.632370   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:42:41.642072   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:42:41.652120   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:42:41.652194   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:42:41.662684   80762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:42:41.894903   80762 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:42:43.817122   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:45.817164   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:47.817201   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:50.316134   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:52.317090   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:54.318066   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:56.816196   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:42:58.817948   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:01.316826   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:03.728120   80404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.068094257s)
	I0612 21:43:03.728183   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:03.744990   80404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:43:03.755365   80404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:43:03.765154   80404 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:43:03.765181   80404 kubeadm.go:156] found existing configuration files:
	
	I0612 21:43:03.765226   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:43:03.775246   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:43:03.775304   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:43:03.785389   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:43:03.794999   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:43:03.795046   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:43:03.804771   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:43:03.814137   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:43:03.814187   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:43:03.824449   80404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:43:03.833631   80404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:43:03.833687   80404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 21:43:03.843203   80404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:43:03.895827   80404 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 21:43:03.895927   80404 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:43:04.040495   80404 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:43:04.040666   80404 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:43:04.040822   80404 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:43:04.252894   80404 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:43:04.254835   80404 out.go:204]   - Generating certificates and keys ...
	I0612 21:43:04.254952   80404 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:43:04.255060   80404 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:43:04.255219   80404 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:43:04.255296   80404 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:43:04.255399   80404 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:43:04.255490   80404 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:43:04.255589   80404 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:43:04.255692   80404 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:43:04.255794   80404 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:43:04.255885   80404 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:43:04.255923   80404 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:43:04.255978   80404 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:43:04.460505   80404 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:43:04.640215   80404 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 21:43:04.722455   80404 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:43:04.862670   80404 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:43:05.112478   80404 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:43:05.113163   80404 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:43:05.115573   80404 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:43:03.817386   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:06.317207   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:05.117650   80404 out.go:204]   - Booting up control plane ...
	I0612 21:43:05.117758   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:43:05.117887   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:43:05.119410   80404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:43:05.139412   80404 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:43:05.139504   80404 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:43:05.139575   80404 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:43:05.268539   80404 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 21:43:05.268636   80404 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 21:43:05.771267   80404 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.898809ms
	I0612 21:43:05.771364   80404 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 21:43:11.274484   80404 kubeadm.go:309] [api-check] The API server is healthy after 5.503111655s
	I0612 21:43:11.291273   80404 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 21:43:11.319349   80404 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 21:43:11.357447   80404 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 21:43:11.357709   80404 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-591460 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 21:43:11.368936   80404 kubeadm.go:309] [bootstrap-token] Using token: 0iiegq.ujvrnknfmyshffxu
	I0612 21:43:08.816875   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:10.817031   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:11.370411   80404 out.go:204]   - Configuring RBAC rules ...
	I0612 21:43:11.370567   80404 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 21:43:11.375891   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 21:43:11.388345   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 21:43:11.392726   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 21:43:11.396867   80404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 21:43:11.401212   80404 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 21:43:11.683506   80404 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 21:43:12.114832   80404 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 21:43:12.683696   80404 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 21:43:12.683724   80404 kubeadm.go:309] 
	I0612 21:43:12.683811   80404 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 21:43:12.683823   80404 kubeadm.go:309] 
	I0612 21:43:12.683938   80404 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 21:43:12.683958   80404 kubeadm.go:309] 
	I0612 21:43:12.684002   80404 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 21:43:12.684070   80404 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 21:43:12.684129   80404 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 21:43:12.684146   80404 kubeadm.go:309] 
	I0612 21:43:12.684232   80404 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 21:43:12.684247   80404 kubeadm.go:309] 
	I0612 21:43:12.684317   80404 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 21:43:12.684330   80404 kubeadm.go:309] 
	I0612 21:43:12.684398   80404 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 21:43:12.684502   80404 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 21:43:12.684595   80404 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 21:43:12.684604   80404 kubeadm.go:309] 
	I0612 21:43:12.684700   80404 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 21:43:12.684807   80404 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 21:43:12.684816   80404 kubeadm.go:309] 
	I0612 21:43:12.684915   80404 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0iiegq.ujvrnknfmyshffxu \
	I0612 21:43:12.685061   80404 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 21:43:12.685105   80404 kubeadm.go:309] 	--control-plane 
	I0612 21:43:12.685116   80404 kubeadm.go:309] 
	I0612 21:43:12.685237   80404 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 21:43:12.685248   80404 kubeadm.go:309] 
	I0612 21:43:12.685352   80404 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0iiegq.ujvrnknfmyshffxu \
	I0612 21:43:12.685509   80404 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 21:43:12.685622   80404 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:43:12.685831   80404 cni.go:84] Creating CNI manager for ""
	I0612 21:43:12.685848   80404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:43:12.687835   80404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:43:12.689100   80404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:43:12.700384   80404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:43:12.720228   80404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:43:12.720305   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:12.720330   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-591460 minikube.k8s.io/updated_at=2024_06_12T21_43_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=embed-certs-591460 minikube.k8s.io/primary=true
	I0612 21:43:12.751866   80404 ops.go:34] apiserver oom_adj: -16
	I0612 21:43:12.927644   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:13.428393   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:13.928221   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:14.428286   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:12.817125   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:15.316899   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:14.928273   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:15.428431   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:15.927968   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:16.428202   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:16.927882   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.428544   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.927844   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:18.428385   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:18.928105   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:19.428421   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:17.317080   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:19.317419   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:21.816670   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:19.928638   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:20.428310   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:20.928565   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:21.428377   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:21.928158   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:22.428356   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:22.927863   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:23.427955   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:23.928226   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:24.427823   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:24.928404   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:25.428367   80404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:43:25.514417   80404 kubeadm.go:1107] duration metric: took 12.794169259s to wait for elevateKubeSystemPrivileges
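The burst of identical `kubectl get sa default` runs above is a poll: the command is retried roughly every 500ms until the default service account exists, which is the signal elevateKubeSystemPrivileges waits for after creating the minikube-rbac clusterrolebinding. A minimal sketch of that retry loop, assuming a plain exec of the same binary path and kubeconfig shown in the log (not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA re-runs `kubectl get sa default` until it succeeds or
    // the deadline passes, matching the ~500ms cadence visible in the log.
    func waitForDefaultSA(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo",
    			"/var/lib/minikube/binaries/v1.30.1/kubectl",
    			"get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			return nil // default service account exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }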
	W0612 21:43:25.514460   80404 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 21:43:25.514470   80404 kubeadm.go:393] duration metric: took 5m14.162212832s to StartCluster
	I0612 21:43:25.514490   80404 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:43:25.514576   80404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:43:25.518597   80404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:43:25.518811   80404 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:43:25.520571   80404 out.go:177] * Verifying Kubernetes components...
	I0612 21:43:25.518903   80404 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:43:25.519030   80404 config.go:182] Loaded profile config "embed-certs-591460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:43:25.521967   80404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:43:25.522001   80404 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-591460"
	I0612 21:43:25.522043   80404 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-591460"
	W0612 21:43:25.522056   80404 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:43:25.522053   80404 addons.go:69] Setting default-storageclass=true in profile "embed-certs-591460"
	I0612 21:43:25.522089   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.522100   80404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-591460"
	I0612 21:43:25.522089   80404 addons.go:69] Setting metrics-server=true in profile "embed-certs-591460"
	I0612 21:43:25.522158   80404 addons.go:234] Setting addon metrics-server=true in "embed-certs-591460"
	W0612 21:43:25.522170   80404 addons.go:243] addon metrics-server should already be in state true
	I0612 21:43:25.522196   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.522502   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522509   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522532   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.522535   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.522585   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.522611   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.538989   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I0612 21:43:25.539032   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0612 21:43:25.539591   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.539592   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.540199   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.540222   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.540293   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.540323   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.540610   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.540736   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.541265   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.541281   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.541312   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.541431   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.542393   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46299
	I0612 21:43:25.543042   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.543604   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.543643   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.543997   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.544208   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.547823   80404 addons.go:234] Setting addon default-storageclass=true in "embed-certs-591460"
	W0612 21:43:25.547849   80404 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:43:25.547878   80404 host.go:66] Checking if "embed-certs-591460" exists ...
	I0612 21:43:25.548237   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.548272   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.558486   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46589
	I0612 21:43:25.558934   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.559936   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.559967   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.560387   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.560600   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.560728   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I0612 21:43:25.561116   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.561595   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.561610   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.561928   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.562198   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.562832   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.565065   80404 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:43:25.563946   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.565393   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0612 21:43:25.566521   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:43:25.566535   80404 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:43:25.566582   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.568114   80404 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:43:24.316660   80157 pod_ready.go:102] pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace has status "Ready":"False"
	I0612 21:43:25.810857   80157 pod_ready.go:81] duration metric: took 4m0.000926725s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" ...
	E0612 21:43:25.810888   80157 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-d5mj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0612 21:43:25.810936   80157 pod_ready.go:38] duration metric: took 4m14.539121336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:25.810971   80157 kubeadm.go:591] duration metric: took 4m21.56451584s to restartPrimaryControlPlane
	W0612 21:43:25.811042   80157 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0612 21:43:25.811074   80157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:43:25.567032   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.569772   80404 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:43:25.569794   80404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:43:25.569812   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.570271   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.570291   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.570363   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.570698   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.571498   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.571514   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.571539   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.571691   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.571861   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.572032   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.572851   80404 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:43:25.572894   80404 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:43:25.573962   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.574403   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.574429   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.574762   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.574974   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.575164   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.575464   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.589637   80404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39227
	I0612 21:43:25.590155   80404 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:43:25.591035   80404 main.go:141] libmachine: Using API Version  1
	I0612 21:43:25.591059   80404 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:43:25.591596   80404 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:43:25.591845   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetState
	I0612 21:43:25.593885   80404 main.go:141] libmachine: (embed-certs-591460) Calling .DriverName
	I0612 21:43:25.594095   80404 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:43:25.594112   80404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:43:25.594131   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHHostname
	I0612 21:43:25.597769   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.598347   80404 main.go:141] libmachine: (embed-certs-591460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:f7:d9", ip: ""} in network mk-embed-certs-591460: {Iface:virbr1 ExpiryTime:2024-06-12 22:37:56 +0000 UTC Type:0 Mac:52:54:00:41:f7:d9 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-591460 Clientid:01:52:54:00:41:f7:d9}
	I0612 21:43:25.598379   80404 main.go:141] libmachine: (embed-certs-591460) DBG | domain embed-certs-591460 has defined IP address 192.168.39.147 and MAC address 52:54:00:41:f7:d9 in network mk-embed-certs-591460
	I0612 21:43:25.598434   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHPort
	I0612 21:43:25.598635   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHKeyPath
	I0612 21:43:25.598766   80404 main.go:141] libmachine: (embed-certs-591460) Calling .GetSSHUsername
	I0612 21:43:25.598860   80404 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/embed-certs-591460/id_rsa Username:docker}
	I0612 21:43:25.762134   80404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:43:25.818663   80404 node_ready.go:35] waiting up to 6m0s for node "embed-certs-591460" to be "Ready" ...
	I0612 21:43:25.830753   80404 node_ready.go:49] node "embed-certs-591460" has status "Ready":"True"
	I0612 21:43:25.830780   80404 node_ready.go:38] duration metric: took 12.086962ms for node "embed-certs-591460" to be "Ready" ...
	I0612 21:43:25.830792   80404 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:25.841084   80404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:25.929395   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:43:25.929427   80404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:43:26.001489   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:43:26.016234   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:43:26.016275   80404 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:43:26.030851   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:43:26.062707   80404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:43:26.062741   80404 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:43:26.157461   80404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:43:27.281342   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.279809959s)
	I0612 21:43:27.281364   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.250478112s)
	I0612 21:43:27.281392   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281405   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281408   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281420   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281712   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.281730   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.281739   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281748   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.281861   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.281916   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.281933   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.281942   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.283567   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.283582   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.283592   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.283597   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.283728   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.283740   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.324600   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.324625   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.324937   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.324941   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.324965   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.366096   80404 pod_ready.go:92] pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:27.366126   80404 pod_ready.go:81] duration metric: took 1.52501871s for pod "coredns-7db6d8ff4d-fpf5q" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:27.366139   80404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:27.530900   80404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.373391416s)
	I0612 21:43:27.530973   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.530987   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.531382   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.531399   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.531406   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.531419   80404 main.go:141] libmachine: Making call to close driver server
	I0612 21:43:27.531428   80404 main.go:141] libmachine: (embed-certs-591460) Calling .Close
	I0612 21:43:27.533199   80404 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:43:27.533212   80404 main.go:141] libmachine: (embed-certs-591460) DBG | Closing plugin on server side
	I0612 21:43:27.533226   80404 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:43:27.533238   80404 addons.go:475] Verifying addon metrics-server=true in "embed-certs-591460"
	I0612 21:43:27.534895   80404 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:43:27.536129   80404 addons.go:510] duration metric: took 2.017228253s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0612 21:43:28.373835   80404 pod_ready.go:92] pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.373862   80404 pod_ready.go:81] duration metric: took 1.007715736s for pod "coredns-7db6d8ff4d-hs7zn" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.373870   80404 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.379042   80404 pod_ready.go:92] pod "etcd-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.379065   80404 pod_ready.go:81] duration metric: took 5.188395ms for pod "etcd-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.379078   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.384218   80404 pod_ready.go:92] pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.384233   80404 pod_ready.go:81] duration metric: took 5.148944ms for pod "kube-apiserver-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.384241   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.389023   80404 pod_ready.go:92] pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.389046   80404 pod_ready.go:81] duration metric: took 4.78947ms for pod "kube-controller-manager-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.389056   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5l2wz" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.623880   80404 pod_ready.go:92] pod "kube-proxy-5l2wz" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:28.623902   80404 pod_ready.go:81] duration metric: took 234.83854ms for pod "kube-proxy-5l2wz" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:28.623910   80404 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:29.022477   80404 pod_ready.go:92] pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace has status "Ready":"True"
	I0612 21:43:29.022508   80404 pod_ready.go:81] duration metric: took 398.590821ms for pod "kube-scheduler-embed-certs-591460" in "kube-system" namespace to be "Ready" ...
	I0612 21:43:29.022522   80404 pod_ready.go:38] duration metric: took 3.191712664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:43:29.022539   80404 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:43:29.022602   80404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:43:29.038776   80404 api_server.go:72] duration metric: took 3.51993276s to wait for apiserver process to appear ...
	I0612 21:43:29.038805   80404 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:43:29.038827   80404 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0612 21:43:29.045455   80404 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0612 21:43:29.047050   80404 api_server.go:141] control plane version: v1.30.1
	I0612 21:43:29.047072   80404 api_server.go:131] duration metric: took 8.260077ms to wait for apiserver health ...
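The healthz step is a plain HTTPS GET against the apiserver endpoint, considered healthy once it returns 200 with the body "ok", as logged above. A rough Go equivalent of that probe (TLS verification is skipped here purely for brevity; a real check should trust the cluster CA instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// NOTE: InsecureSkipVerify is only for illustration.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.39.147:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz error:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body) // expect: healthz 200: ok
    }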
	I0612 21:43:29.047080   80404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:43:29.226569   80404 system_pods.go:59] 9 kube-system pods found
	I0612 21:43:29.226603   80404 system_pods.go:61] "coredns-7db6d8ff4d-fpf5q" [1091154b-ef24-4447-b294-03f8d704f37e] Running
	I0612 21:43:29.226611   80404 system_pods.go:61] "coredns-7db6d8ff4d-hs7zn" [d8af54bf-17f9-48fe-a770-536c2313bc2a] Running
	I0612 21:43:29.226618   80404 system_pods.go:61] "etcd-embed-certs-591460" [bc7ad3a2-6cb6-4c32-94a7-20f6e3337b86] Running
	I0612 21:43:29.226624   80404 system_pods.go:61] "kube-apiserver-embed-certs-591460" [94b14cb3-5c3d-4be7-b5dc-3259d1fac58c] Running
	I0612 21:43:29.226631   80404 system_pods.go:61] "kube-controller-manager-embed-certs-591460" [c66f1ad8-df77-466e-9bbf-292e0937c7df] Running
	I0612 21:43:29.226636   80404 system_pods.go:61] "kube-proxy-5l2wz" [7130c7fb-880b-4a7b-937d-3980c89f217a] Running
	I0612 21:43:29.226642   80404 system_pods.go:61] "kube-scheduler-embed-certs-591460" [a02c9ded-942d-4107-a8f5-878a7924f1a4] Running
	I0612 21:43:29.226652   80404 system_pods.go:61] "metrics-server-569cc877fc-r7fbt" [e33a1ff8-3032-4be5-8b6a-3eedfbb92611] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:43:29.226659   80404 system_pods.go:61] "storage-provisioner" [ade8816b-866c-4ba3-9665-fc9b144a4286] Running
	I0612 21:43:29.226671   80404 system_pods.go:74] duration metric: took 179.583899ms to wait for pod list to return data ...
	I0612 21:43:29.226684   80404 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:43:29.422244   80404 default_sa.go:45] found service account: "default"
	I0612 21:43:29.422278   80404 default_sa.go:55] duration metric: took 195.585835ms for default service account to be created ...
	I0612 21:43:29.422290   80404 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:43:29.626614   80404 system_pods.go:86] 9 kube-system pods found
	I0612 21:43:29.626650   80404 system_pods.go:89] "coredns-7db6d8ff4d-fpf5q" [1091154b-ef24-4447-b294-03f8d704f37e] Running
	I0612 21:43:29.626659   80404 system_pods.go:89] "coredns-7db6d8ff4d-hs7zn" [d8af54bf-17f9-48fe-a770-536c2313bc2a] Running
	I0612 21:43:29.626667   80404 system_pods.go:89] "etcd-embed-certs-591460" [bc7ad3a2-6cb6-4c32-94a7-20f6e3337b86] Running
	I0612 21:43:29.626673   80404 system_pods.go:89] "kube-apiserver-embed-certs-591460" [94b14cb3-5c3d-4be7-b5dc-3259d1fac58c] Running
	I0612 21:43:29.626680   80404 system_pods.go:89] "kube-controller-manager-embed-certs-591460" [c66f1ad8-df77-466e-9bbf-292e0937c7df] Running
	I0612 21:43:29.626687   80404 system_pods.go:89] "kube-proxy-5l2wz" [7130c7fb-880b-4a7b-937d-3980c89f217a] Running
	I0612 21:43:29.626693   80404 system_pods.go:89] "kube-scheduler-embed-certs-591460" [a02c9ded-942d-4107-a8f5-878a7924f1a4] Running
	I0612 21:43:29.626703   80404 system_pods.go:89] "metrics-server-569cc877fc-r7fbt" [e33a1ff8-3032-4be5-8b6a-3eedfbb92611] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:43:29.626714   80404 system_pods.go:89] "storage-provisioner" [ade8816b-866c-4ba3-9665-fc9b144a4286] Running
	I0612 21:43:29.626725   80404 system_pods.go:126] duration metric: took 204.428087ms to wait for k8s-apps to be running ...
	I0612 21:43:29.626737   80404 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:43:29.626793   80404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:29.642423   80404 system_svc.go:56] duration metric: took 15.67694ms WaitForService to wait for kubelet
	I0612 21:43:29.642457   80404 kubeadm.go:576] duration metric: took 4.123619864s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:43:29.642481   80404 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:43:29.825804   80404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:43:29.825833   80404 node_conditions.go:123] node cpu capacity is 2
	I0612 21:43:29.825846   80404 node_conditions.go:105] duration metric: took 183.359091ms to run NodePressure ...
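Verifying NodePressure reads each node's reported capacity (17734596Ki of ephemeral storage and 2 CPUs in this run) and its pressure conditions. A roughly equivalent client-go sketch, assuming the kubeconfig path used earlier in this log:

    package main

    import (
    	"context"
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path assumed from the settings.go lines in this log.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17779-14199/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		eph := n.Status.Capacity[v1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[v1.ResourceCPU]
    		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
    		for _, c := range n.Status.Conditions {
    			// Pressure conditions should all be False on a healthy node.
    			if c.Type == v1.NodeMemoryPressure || c.Type == v1.NodeDiskPressure || c.Type == v1.NodePIDPressure {
    				fmt.Printf("  %s=%s\n", c.Type, c.Status)
    			}
    		}
    	}
    }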
	I0612 21:43:29.825860   80404 start.go:240] waiting for startup goroutines ...
	I0612 21:43:29.825868   80404 start.go:245] waiting for cluster config update ...
	I0612 21:43:29.825881   80404 start.go:254] writing updated cluster config ...
	I0612 21:43:29.826229   80404 ssh_runner.go:195] Run: rm -f paused
	I0612 21:43:29.878580   80404 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:43:29.880438   80404 out.go:177] * Done! kubectl is now configured to use "embed-certs-591460" cluster and "default" namespace by default
	I0612 21:43:57.924825   80157 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.113719509s)
	I0612 21:43:57.924912   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:43:57.942507   80157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 21:43:57.953901   80157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:43:57.964374   80157 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:43:57.964396   80157 kubeadm.go:156] found existing configuration files:
	
	I0612 21:43:57.964439   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:43:57.974281   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:43:57.974366   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:43:57.985000   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:43:57.995268   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:43:57.995346   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:43:58.005482   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:43:58.015598   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:43:58.015659   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:43:58.028582   80157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:43:58.038706   80157 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:43:58.038756   80157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
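The grep-then-rm sequence above applies a simple rule before kubeadm is re-run: any of the four well-known kubeconfig files that cannot be shown to point at https://control-plane.minikube.internal:8443 is deleted so kubeadm can regenerate it. A compact sketch of that cleanup (helper name is hypothetical, not minikube's code):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    const controlPlaneURL = "https://control-plane.minikube.internal:8443"

    // cleanStaleKubeconfigs removes each kubeadm config file that is missing
    // or does not reference the expected endpoint, matching the
    // grep (status 2) followed by `rm -f` pattern in the log above.
    func cleanStaleKubeconfigs() {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !bytes.Contains(data, []byte(controlPlaneURL)) {
    			// grep failed (file absent or wrong endpoint): remove and let
    			// kubeadm regenerate it on the next init.
    			_ = os.Remove(f)
    			fmt.Println("removed (or already absent):", f)
    		}
    	}
    }

    func main() { cleanStaleKubeconfigs() }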
	I0612 21:43:58.051818   80157 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:43:58.110576   80157 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 21:43:58.110645   80157 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:43:58.274454   80157 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:43:58.274625   80157 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:43:58.274751   80157 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0612 21:43:58.484837   80157 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:43:58.486643   80157 out.go:204]   - Generating certificates and keys ...
	I0612 21:43:58.486753   80157 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:43:58.486845   80157 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:43:58.486963   80157 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:43:58.487058   80157 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:43:58.487192   80157 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:43:58.487283   80157 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:43:58.487368   80157 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:43:58.487452   80157 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:43:58.487559   80157 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:43:58.487653   80157 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:43:58.487728   80157 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:43:58.487826   80157 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:43:58.644916   80157 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:43:58.789369   80157 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 21:43:58.924153   80157 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:43:59.044332   80157 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:43:59.352910   80157 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:43:59.353462   80157 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:43:59.356967   80157 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:43:59.359470   80157 out.go:204]   - Booting up control plane ...
	I0612 21:43:59.359596   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:43:59.359687   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:43:59.359792   80157 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:43:59.378280   80157 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:43:59.379149   80157 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:43:59.379240   80157 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:43:59.521694   80157 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 21:43:59.521775   80157 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 21:44:00.036696   80157 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 514.972931ms
	I0612 21:44:00.036836   80157 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 21:44:05.539363   80157 kubeadm.go:309] [api-check] The API server is healthy after 5.502859715s
	I0612 21:44:05.552779   80157 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 21:44:05.567296   80157 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 21:44:05.603398   80157 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 21:44:05.603707   80157 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-087875 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 21:44:05.619311   80157 kubeadm.go:309] [bootstrap-token] Using token: x2knjj.1kuv2wdowwsbztfg
	I0612 21:44:05.621026   80157 out.go:204]   - Configuring RBAC rules ...
	I0612 21:44:05.621180   80157 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 21:44:05.628474   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 21:44:05.642438   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 21:44:05.647606   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 21:44:05.651982   80157 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 21:44:05.656129   80157 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 21:44:05.947680   80157 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 21:44:06.430716   80157 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 21:44:06.950446   80157 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 21:44:06.951688   80157 kubeadm.go:309] 
	I0612 21:44:06.951771   80157 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 21:44:06.951782   80157 kubeadm.go:309] 
	I0612 21:44:06.951857   80157 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 21:44:06.951866   80157 kubeadm.go:309] 
	I0612 21:44:06.951919   80157 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 21:44:06.952007   80157 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 21:44:06.952083   80157 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 21:44:06.952094   80157 kubeadm.go:309] 
	I0612 21:44:06.952160   80157 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 21:44:06.952172   80157 kubeadm.go:309] 
	I0612 21:44:06.952222   80157 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 21:44:06.952232   80157 kubeadm.go:309] 
	I0612 21:44:06.952285   80157 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 21:44:06.952375   80157 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 21:44:06.952460   80157 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 21:44:06.952476   80157 kubeadm.go:309] 
	I0612 21:44:06.952612   80157 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 21:44:06.952711   80157 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 21:44:06.952722   80157 kubeadm.go:309] 
	I0612 21:44:06.952819   80157 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token x2knjj.1kuv2wdowwsbztfg \
	I0612 21:44:06.952933   80157 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a \
	I0612 21:44:06.952963   80157 kubeadm.go:309] 	--control-plane 
	I0612 21:44:06.952985   80157 kubeadm.go:309] 
	I0612 21:44:06.953100   80157 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 21:44:06.953114   80157 kubeadm.go:309] 
	I0612 21:44:06.953219   80157 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token x2knjj.1kuv2wdowwsbztfg \
	I0612 21:44:06.953373   80157 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:47c7bcbdc5206af46b9793ab0454eb6f582cbdc799f21f68d2bb0154158e384a 
	I0612 21:44:06.953943   80157 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:44:06.953986   80157 cni.go:84] Creating CNI manager for ""
	I0612 21:44:06.954003   80157 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 21:44:06.956587   80157 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 21:44:06.957989   80157 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 21:44:06.972666   80157 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0612 21:44:07.000720   80157 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 21:44:07.000822   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:07.000839   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-087875 minikube.k8s.io/updated_at=2024_06_12T21_44_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8d282d3acc6cc32dfd3bc1bc39edde8a887b9d79 minikube.k8s.io/name=no-preload-087875 minikube.k8s.io/primary=true
	I0612 21:44:07.201613   80157 ops.go:34] apiserver oom_adj: -16
	I0612 21:44:07.201713   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:07.702791   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:08.201886   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:08.702020   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:09.202755   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:09.702683   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:10.202007   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:10.702272   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:11.201764   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:11.702383   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:12.201880   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:12.702587   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:13.202524   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:13.702498   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:14.202157   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:14.702197   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:15.201852   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:15.702444   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:16.201919   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:16.701722   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:17.202307   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:17.701823   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:18.202602   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:18.702354   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:19.202207   80157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 21:44:19.308654   80157 kubeadm.go:1107] duration metric: took 12.307897648s to wait for elevateKubeSystemPrivileges
	W0612 21:44:19.308699   80157 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 21:44:19.308709   80157 kubeadm.go:393] duration metric: took 5m15.118303799s to StartCluster
	I0612 21:44:19.308738   80157 settings.go:142] acquiring lock: {Name:mkf84c2b75038a5495754241340b980300bbb23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:44:19.308825   80157 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:44:19.311295   80157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/kubeconfig: {Name:mkc7ab966096bc741778697e64eab663acf18b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 21:44:19.311587   80157 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.63 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0612 21:44:19.313263   80157 out.go:177] * Verifying Kubernetes components...
	I0612 21:44:19.311693   80157 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 21:44:19.311780   80157 config.go:182] Loaded profile config "no-preload-087875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:44:19.315137   80157 addons.go:69] Setting storage-provisioner=true in profile "no-preload-087875"
	I0612 21:44:19.315148   80157 addons.go:69] Setting default-storageclass=true in profile "no-preload-087875"
	I0612 21:44:19.315192   80157 addons.go:234] Setting addon storage-provisioner=true in "no-preload-087875"
	I0612 21:44:19.315201   80157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-087875"
	I0612 21:44:19.315202   80157 addons.go:69] Setting metrics-server=true in profile "no-preload-087875"
	I0612 21:44:19.315240   80157 addons.go:234] Setting addon metrics-server=true in "no-preload-087875"
	W0612 21:44:19.315255   80157 addons.go:243] addon metrics-server should already be in state true
	I0612 21:44:19.315296   80157 host.go:66] Checking if "no-preload-087875" exists ...
	W0612 21:44:19.315209   80157 addons.go:243] addon storage-provisioner should already be in state true
	I0612 21:44:19.315397   80157 host.go:66] Checking if "no-preload-087875" exists ...
	I0612 21:44:19.315139   80157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 21:44:19.315636   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315666   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.315653   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315698   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.315731   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.315750   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.331461   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40419
	I0612 21:44:19.331495   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I0612 21:44:19.331924   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.332019   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.332446   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.332466   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.332580   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.332603   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.332866   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.332911   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.333087   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.333484   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.333508   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.334462   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0612 21:44:19.334922   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.335447   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.335474   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.335812   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.336376   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.336408   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.336657   80157 addons.go:234] Setting addon default-storageclass=true in "no-preload-087875"
	W0612 21:44:19.336675   80157 addons.go:243] addon default-storageclass should already be in state true
	I0612 21:44:19.336701   80157 host.go:66] Checking if "no-preload-087875" exists ...
	I0612 21:44:19.337047   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.337078   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.350724   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I0612 21:44:19.351308   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.351869   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.351897   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.352272   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.352503   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.354434   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I0612 21:44:19.354532   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.356594   80157 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 21:44:19.354927   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.355284   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37489
	I0612 21:44:19.357181   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.358026   80157 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:44:19.357219   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.358040   80157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 21:44:19.358048   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.358058   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.358407   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.358560   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.358577   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.359024   80157 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17779-14199/.minikube/bin/docker-machine-driver-kvm2
	I0612 21:44:19.359035   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.359069   80157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:44:19.359408   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.361013   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.361524   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.363337   80157 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0612 21:44:19.361921   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.362312   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.364713   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 21:44:19.364727   80157 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 21:44:19.364736   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.364744   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.365021   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.365260   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.365419   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.368572   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.368971   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.368988   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.369144   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.369316   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.369431   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.369538   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.377220   80157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
	I0612 21:44:19.377598   80157 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:44:19.378595   80157 main.go:141] libmachine: Using API Version  1
	I0612 21:44:19.378621   80157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:44:19.378931   80157 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:44:19.379127   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetState
	I0612 21:44:19.380646   80157 main.go:141] libmachine: (no-preload-087875) Calling .DriverName
	I0612 21:44:19.380844   80157 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 21:44:19.380857   80157 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 21:44:19.380869   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHHostname
	I0612 21:44:19.383763   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.384201   80157 main.go:141] libmachine: (no-preload-087875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:a2:aa", ip: ""} in network mk-no-preload-087875: {Iface:virbr4 ExpiryTime:2024-06-12 22:38:35 +0000 UTC Type:0 Mac:52:54:00:6b:a2:aa Iaid: IPaddr:192.168.72.63 Prefix:24 Hostname:no-preload-087875 Clientid:01:52:54:00:6b:a2:aa}
	I0612 21:44:19.384216   80157 main.go:141] libmachine: (no-preload-087875) DBG | domain no-preload-087875 has defined IP address 192.168.72.63 and MAC address 52:54:00:6b:a2:aa in network mk-no-preload-087875
	I0612 21:44:19.384504   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHPort
	I0612 21:44:19.384660   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHKeyPath
	I0612 21:44:19.384816   80157 main.go:141] libmachine: (no-preload-087875) Calling .GetSSHUsername
	I0612 21:44:19.384956   80157 sshutil.go:53] new ssh client: &{IP:192.168.72.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/no-preload-087875/id_rsa Username:docker}
	I0612 21:44:19.516231   80157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 21:44:19.539205   80157 node_ready.go:35] waiting up to 6m0s for node "no-preload-087875" to be "Ready" ...
	I0612 21:44:19.546948   80157 node_ready.go:49] node "no-preload-087875" has status "Ready":"True"
	I0612 21:44:19.546972   80157 node_ready.go:38] duration metric: took 7.739123ms for node "no-preload-087875" to be "Ready" ...
	I0612 21:44:19.546985   80157 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:44:19.553454   80157 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.562831   80157 pod_ready.go:92] pod "etcd-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.562854   80157 pod_ready.go:81] duration metric: took 9.377758ms for pod "etcd-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.562862   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.568274   80157 pod_ready.go:92] pod "kube-apiserver-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.568296   80157 pod_ready.go:81] duration metric: took 5.425162ms for pod "kube-apiserver-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.568306   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.572960   80157 pod_ready.go:92] pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:19.572991   80157 pod_ready.go:81] duration metric: took 4.669828ms for pod "kube-controller-manager-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.573002   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lnhzt" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:19.620522   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 21:44:19.620548   80157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0612 21:44:19.654325   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 21:44:19.681762   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 21:44:19.681800   80157 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 21:44:19.699701   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 21:44:19.774496   80157 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:44:19.774526   80157 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 21:44:19.874891   80157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 21:44:20.590260   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590292   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.590276   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590360   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.590587   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.590634   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.590644   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.590651   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.590658   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.592402   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.592462   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.592410   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.592411   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.592414   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.592551   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.592476   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.592655   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.592952   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.593069   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.593093   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:20.634339   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:20.634370   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:20.634813   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:20.634864   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:20.634880   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.321337   80157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.446394551s)
	I0612 21:44:21.321389   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:21.321403   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:21.321802   80157 main.go:141] libmachine: (no-preload-087875) DBG | Closing plugin on server side
	I0612 21:44:21.321827   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:21.321968   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.322012   80157 main.go:141] libmachine: Making call to close driver server
	I0612 21:44:21.322023   80157 main.go:141] libmachine: (no-preload-087875) Calling .Close
	I0612 21:44:21.322278   80157 main.go:141] libmachine: Successfully made call to close driver server
	I0612 21:44:21.322294   80157 main.go:141] libmachine: Making call to close connection to plugin binary
	I0612 21:44:21.322305   80157 addons.go:475] Verifying addon metrics-server=true in "no-preload-087875"
	I0612 21:44:21.324652   80157 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0612 21:44:21.326653   80157 addons.go:510] duration metric: took 2.01495884s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
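The addon verification above only confirms the manifests were applied; to watch the metrics-server workload actually come up, something like the following could be used (the context name is taken from the log, while the deployment name "metrics-server" is an assumption inferred from the pod name):
	# Assumed deployment name; context written by minikube for this profile.
	kubectl --context no-preload-087875 -n kube-system rollout status deployment/metrics-server --timeout=120s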
	I0612 21:44:21.589251   80157 pod_ready.go:92] pod "kube-proxy-lnhzt" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:21.589290   80157 pod_ready.go:81] duration metric: took 2.016278458s for pod "kube-proxy-lnhzt" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.589305   80157 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.652083   80157 pod_ready.go:92] pod "kube-scheduler-no-preload-087875" in "kube-system" namespace has status "Ready":"True"
	I0612 21:44:21.652122   80157 pod_ready.go:81] duration metric: took 62.805318ms for pod "kube-scheduler-no-preload-087875" in "kube-system" namespace to be "Ready" ...
	I0612 21:44:21.652136   80157 pod_ready.go:38] duration metric: took 2.105136343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 21:44:21.652156   80157 api_server.go:52] waiting for apiserver process to appear ...
	I0612 21:44:21.652237   80157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:44:21.683110   80157 api_server.go:72] duration metric: took 2.371482611s to wait for apiserver process to appear ...
	I0612 21:44:21.683148   80157 api_server.go:88] waiting for apiserver healthz status ...
	I0612 21:44:21.683187   80157 api_server.go:253] Checking apiserver healthz at https://192.168.72.63:8443/healthz ...
	I0612 21:44:21.704637   80157 api_server.go:279] https://192.168.72.63:8443/healthz returned 200:
	ok
	I0612 21:44:21.714032   80157 api_server.go:141] control plane version: v1.30.1
	I0612 21:44:21.714061   80157 api_server.go:131] duration metric: took 30.904631ms to wait for apiserver health ...
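The healthz probe above can be reproduced by hand against the endpoint the log reports; a rough sketch (curl -k skips certificate verification and assumes anonymous access to /healthz is enabled):
	# Endpoint taken from the log above.
	curl -sk https://192.168.72.63:8443/healthz; echo
	# Or via the kubeconfig minikube manages for this profile:
	kubectl --context no-preload-087875 get --raw /healthz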
	I0612 21:44:21.714070   80157 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 21:44:21.751484   80157 system_pods.go:59] 9 kube-system pods found
	I0612 21:44:21.751520   80157 system_pods.go:61] "coredns-7db6d8ff4d-hsvvf" [2b6c768b-75e2-4c11-99db-1103367ccc20] Running
	I0612 21:44:21.751526   80157 system_pods.go:61] "coredns-7db6d8ff4d-v75tt" [8b48ba7d-8f66-4c31-ac14-3a38e18fa249] Running
	I0612 21:44:21.751532   80157 system_pods.go:61] "etcd-no-preload-087875" [36cea519-d5ea-41f0-893f-358fe8af4448] Running
	I0612 21:44:21.751537   80157 system_pods.go:61] "kube-apiserver-no-preload-087875" [a09319fb-adef-467d-8482-5adf57328c2b] Running
	I0612 21:44:21.751544   80157 system_pods.go:61] "kube-controller-manager-no-preload-087875" [466fead1-a45a-4b33-8587-dc894fa20073] Running
	I0612 21:44:21.751548   80157 system_pods.go:61] "kube-proxy-lnhzt" [bdf1156c-ba02-4551-aefa-66379b05e066] Running
	I0612 21:44:21.751552   80157 system_pods.go:61] "kube-scheduler-no-preload-087875" [fc8eccee-2e27-4ea0-9e6c-0d5c127cdd4f] Running
	I0612 21:44:21.751560   80157 system_pods.go:61] "metrics-server-569cc877fc-mdmgw" [17725ee6-1d17-4a1b-9c65-f596b9b7725f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:44:21.751568   80157 system_pods.go:61] "storage-provisioner" [90368fec-12d9-4baf-aef6-233691b5e99d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:44:21.751581   80157 system_pods.go:74] duration metric: took 37.503399ms to wait for pod list to return data ...
	I0612 21:44:21.751595   80157 default_sa.go:34] waiting for default service account to be created ...
	I0612 21:44:21.943440   80157 default_sa.go:45] found service account: "default"
	I0612 21:44:21.943465   80157 default_sa.go:55] duration metric: took 191.863221ms for default service account to be created ...
	I0612 21:44:21.943473   80157 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 21:44:22.146922   80157 system_pods.go:86] 9 kube-system pods found
	I0612 21:44:22.146960   80157 system_pods.go:89] "coredns-7db6d8ff4d-hsvvf" [2b6c768b-75e2-4c11-99db-1103367ccc20] Running
	I0612 21:44:22.146969   80157 system_pods.go:89] "coredns-7db6d8ff4d-v75tt" [8b48ba7d-8f66-4c31-ac14-3a38e18fa249] Running
	I0612 21:44:22.146975   80157 system_pods.go:89] "etcd-no-preload-087875" [36cea519-d5ea-41f0-893f-358fe8af4448] Running
	I0612 21:44:22.146982   80157 system_pods.go:89] "kube-apiserver-no-preload-087875" [a09319fb-adef-467d-8482-5adf57328c2b] Running
	I0612 21:44:22.146988   80157 system_pods.go:89] "kube-controller-manager-no-preload-087875" [466fead1-a45a-4b33-8587-dc894fa20073] Running
	I0612 21:44:22.146994   80157 system_pods.go:89] "kube-proxy-lnhzt" [bdf1156c-ba02-4551-aefa-66379b05e066] Running
	I0612 21:44:22.147000   80157 system_pods.go:89] "kube-scheduler-no-preload-087875" [fc8eccee-2e27-4ea0-9e6c-0d5c127cdd4f] Running
	I0612 21:44:22.147012   80157 system_pods.go:89] "metrics-server-569cc877fc-mdmgw" [17725ee6-1d17-4a1b-9c65-f596b9b7725f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0612 21:44:22.147030   80157 system_pods.go:89] "storage-provisioner" [90368fec-12d9-4baf-aef6-233691b5e99d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 21:44:22.147042   80157 system_pods.go:126] duration metric: took 203.562938ms to wait for k8s-apps to be running ...
	I0612 21:44:22.147056   80157 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 21:44:22.147110   80157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:44:22.167568   80157 system_svc.go:56] duration metric: took 20.500218ms WaitForService to wait for kubelet
	I0612 21:44:22.167606   80157 kubeadm.go:576] duration metric: took 2.855984791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 21:44:22.167627   80157 node_conditions.go:102] verifying NodePressure condition ...
	I0612 21:44:22.343015   80157 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 21:44:22.343039   80157 node_conditions.go:123] node cpu capacity is 2
	I0612 21:44:22.343051   80157 node_conditions.go:105] duration metric: took 175.419211ms to run NodePressure ...
	I0612 21:44:22.343064   80157 start.go:240] waiting for startup goroutines ...
	I0612 21:44:22.343073   80157 start.go:245] waiting for cluster config update ...
	I0612 21:44:22.343085   80157 start.go:254] writing updated cluster config ...
	I0612 21:44:22.343387   80157 ssh_runner.go:195] Run: rm -f paused
	I0612 21:44:22.391092   80157 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 21:44:22.393268   80157 out.go:177] * Done! kubectl is now configured to use "no-preload-087875" cluster and "default" namespace by default
	I0612 21:44:37.700712   80762 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:44:37.700862   80762 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:44:37.702455   80762 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:44:37.702552   80762 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:44:37.702639   80762 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:44:37.702749   80762 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:44:37.702887   80762 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:44:37.702992   80762 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:44:37.704955   80762 out.go:204]   - Generating certificates and keys ...
	I0612 21:44:37.705032   80762 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:44:37.705088   80762 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:44:37.705159   80762 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:44:37.705228   80762 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:44:37.705289   80762 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:44:37.705368   80762 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:44:37.705467   80762 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:44:37.705538   80762 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:44:37.705620   80762 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:44:37.705683   80762 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:44:37.705723   80762 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:44:37.705773   80762 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:44:37.705816   80762 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:44:37.705861   80762 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:44:37.705917   80762 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:44:37.705964   80762 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:44:37.706062   80762 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:44:37.706172   80762 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:44:37.706231   80762 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:44:37.706288   80762 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:44:37.707753   80762 out.go:204]   - Booting up control plane ...
	I0612 21:44:37.707857   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:44:37.707931   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:44:37.707994   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:44:37.708064   80762 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:44:37.708197   80762 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:44:37.708251   80762 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:44:37.708344   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.708536   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.708600   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.708770   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.708864   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709067   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709133   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709340   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709441   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:44:37.709638   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:44:37.709650   80762 kubeadm.go:309] 
	I0612 21:44:37.709683   80762 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:44:37.709721   80762 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:44:37.709728   80762 kubeadm.go:309] 
	I0612 21:44:37.709777   80762 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:44:37.709817   80762 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:44:37.709910   80762 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:44:37.709917   80762 kubeadm.go:309] 
	I0612 21:44:37.710018   80762 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:44:37.710052   80762 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:44:37.710083   80762 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:44:37.710089   80762 kubeadm.go:309] 
	I0612 21:44:37.710184   80762 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:44:37.710259   80762 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:44:37.710265   80762 kubeadm.go:309] 
	I0612 21:44:37.710359   80762 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:44:37.710431   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:44:37.710497   80762 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:44:37.710563   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:44:37.710607   80762 kubeadm.go:309] 
	W0612 21:44:37.710666   80762 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
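The troubleshooting steps kubeadm suggests in the dump above can be run in one pass on the node; a minimal sketch using the exact commands from the log (CONTAINERID is a placeholder):
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# For a failing container found above:
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID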
	
	I0612 21:44:37.710709   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0612 21:44:38.170461   80762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:44:38.186842   80762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 21:44:38.198380   80762 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 21:44:38.198400   80762 kubeadm.go:156] found existing configuration files:
	
	I0612 21:44:38.198454   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 21:44:38.208876   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 21:44:38.208948   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 21:44:38.219641   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 21:44:38.229622   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 21:44:38.229685   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 21:44:38.240153   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 21:44:38.251342   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 21:44:38.251401   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 21:44:38.262662   80762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 21:44:38.272898   80762 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 21:44:38.272954   80762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
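The four grep-and-remove passes above amount to the following loop (a sketch of the same stale-config cleanup, with the endpoint and paths taken from the log):
	# Drop any kubeconfig that does not reference the expected control-plane endpoint.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done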
	I0612 21:44:38.283213   80762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 21:44:38.501637   80762 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 21:46:34.582636   80762 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0612 21:46:34.582745   80762 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0612 21:46:34.584702   80762 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0612 21:46:34.584775   80762 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 21:46:34.584898   80762 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 21:46:34.585029   80762 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 21:46:34.585172   80762 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 21:46:34.585263   80762 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 21:46:34.587030   80762 out.go:204]   - Generating certificates and keys ...
	I0612 21:46:34.587101   80762 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 21:46:34.587160   80762 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 21:46:34.587260   80762 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 21:46:34.587349   80762 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0612 21:46:34.587446   80762 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0612 21:46:34.587521   80762 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0612 21:46:34.587609   80762 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0612 21:46:34.587697   80762 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0612 21:46:34.587803   80762 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 21:46:34.587886   80762 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 21:46:34.588014   80762 kubeadm.go:309] [certs] Using the existing "sa" key
	I0612 21:46:34.588097   80762 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 21:46:34.588177   80762 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 21:46:34.588268   80762 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 21:46:34.588381   80762 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 21:46:34.588447   80762 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 21:46:34.588558   80762 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 21:46:34.588659   80762 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 21:46:34.588719   80762 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 21:46:34.588816   80762 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 21:46:34.590114   80762 out.go:204]   - Booting up control plane ...
	I0612 21:46:34.590226   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 21:46:34.590326   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 21:46:34.590444   80762 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 21:46:34.590527   80762 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 21:46:34.590710   80762 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0612 21:46:34.590778   80762 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0612 21:46:34.590847   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591054   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591149   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591411   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591508   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.591743   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.591846   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.592108   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.592205   80762 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0612 21:46:34.592395   80762 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0612 21:46:34.592403   80762 kubeadm.go:309] 
	I0612 21:46:34.592436   80762 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0612 21:46:34.592485   80762 kubeadm.go:309] 		timed out waiting for the condition
	I0612 21:46:34.592500   80762 kubeadm.go:309] 
	I0612 21:46:34.592535   80762 kubeadm.go:309] 	This error is likely caused by:
	I0612 21:46:34.592563   80762 kubeadm.go:309] 		- The kubelet is not running
	I0612 21:46:34.592677   80762 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0612 21:46:34.592688   80762 kubeadm.go:309] 
	I0612 21:46:34.592820   80762 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0612 21:46:34.592855   80762 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0612 21:46:34.592883   80762 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0612 21:46:34.592890   80762 kubeadm.go:309] 
	I0612 21:46:34.593007   80762 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0612 21:46:34.593107   80762 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0612 21:46:34.593116   80762 kubeadm.go:309] 
	I0612 21:46:34.593224   80762 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0612 21:46:34.593342   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0612 21:46:34.593426   80762 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0612 21:46:34.593494   80762 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0612 21:46:34.593552   80762 kubeadm.go:393] duration metric: took 8m2.356271864s to StartCluster
	I0612 21:46:34.593558   80762 kubeadm.go:309] 
	I0612 21:46:34.593589   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0612 21:46:34.593639   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0612 21:46:34.643842   80762 cri.go:89] found id: ""
	I0612 21:46:34.643876   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.643887   80762 logs.go:278] No container was found matching "kube-apiserver"
	I0612 21:46:34.643905   80762 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0612 21:46:34.643982   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0612 21:46:34.682878   80762 cri.go:89] found id: ""
	I0612 21:46:34.682899   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.682906   80762 logs.go:278] No container was found matching "etcd"
	I0612 21:46:34.682912   80762 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0612 21:46:34.682961   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0612 21:46:34.721931   80762 cri.go:89] found id: ""
	I0612 21:46:34.721955   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.721964   80762 logs.go:278] No container was found matching "coredns"
	I0612 21:46:34.721969   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0612 21:46:34.722021   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0612 21:46:34.759233   80762 cri.go:89] found id: ""
	I0612 21:46:34.759266   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.759274   80762 logs.go:278] No container was found matching "kube-scheduler"
	I0612 21:46:34.759280   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0612 21:46:34.759333   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0612 21:46:34.800142   80762 cri.go:89] found id: ""
	I0612 21:46:34.800176   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.800186   80762 logs.go:278] No container was found matching "kube-proxy"
	I0612 21:46:34.800194   80762 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0612 21:46:34.800256   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0612 21:46:34.836746   80762 cri.go:89] found id: ""
	I0612 21:46:34.836774   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.836784   80762 logs.go:278] No container was found matching "kube-controller-manager"
	I0612 21:46:34.836791   80762 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0612 21:46:34.836850   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0612 21:46:34.876108   80762 cri.go:89] found id: ""
	I0612 21:46:34.876138   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.876147   80762 logs.go:278] No container was found matching "kindnet"
	I0612 21:46:34.876153   80762 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0612 21:46:34.876202   80762 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0612 21:46:34.912272   80762 cri.go:89] found id: ""
	I0612 21:46:34.912294   80762 logs.go:276] 0 containers: []
	W0612 21:46:34.912301   80762 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0612 21:46:34.912310   80762 logs.go:123] Gathering logs for describe nodes ...
	I0612 21:46:34.912324   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0612 21:46:34.997300   80762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0612 21:46:34.997331   80762 logs.go:123] Gathering logs for CRI-O ...
	I0612 21:46:34.997347   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0612 21:46:35.105602   80762 logs.go:123] Gathering logs for container status ...
	I0612 21:46:35.105638   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 21:46:35.152818   80762 logs.go:123] Gathering logs for kubelet ...
	I0612 21:46:35.152857   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 21:46:35.216504   80762 logs.go:123] Gathering logs for dmesg ...
	I0612 21:46:35.216545   80762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
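The same diagnostics can be gathered on the node directly; the commands below are the ones the log runs above:
	# No control-plane containers were found, so the service journals are the main signal.
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400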
	W0612 21:46:35.239531   80762 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0612 21:46:35.239581   80762 out.go:239] * 
	W0612 21:46:35.239646   80762 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:46:35.239672   80762 out.go:239] * 
	W0612 21:46:35.240600   80762 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0612 21:46:35.244822   80762 out.go:177] 
	W0612 21:46:35.246072   80762 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0612 21:46:35.246137   80762 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0612 21:46:35.246164   80762 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0612 21:46:35.247768   80762 out.go:177] 
	
	
	==> CRI-O <==
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.900497851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229456900437987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f85e35f-6d45-41bc-85ed-a82691591ca8 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.901137959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53c64438-e930-48be-bd7d-024c4d99743c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.901190490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53c64438-e930-48be-bd7d-024c4d99743c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.901221419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=53c64438-e930-48be-bd7d-024c4d99743c name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.938331780Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f94a3ef9-b3ad-45a2-8e54-72c9c8fa6b6b name=/runtime.v1.RuntimeService/Version
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.938440469Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f94a3ef9-b3ad-45a2-8e54-72c9c8fa6b6b name=/runtime.v1.RuntimeService/Version
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.939883817Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc0827ba-0ef8-47a2-959d-ac2d3d71c168 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.940564194Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229456940479001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc0827ba-0ef8-47a2-959d-ac2d3d71c168 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.941224749Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a98f892-ec51-4d0d-84b8-c9ddfb35b2ab name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.941300057Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a98f892-ec51-4d0d-84b8-c9ddfb35b2ab name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.941353814Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2a98f892-ec51-4d0d-84b8-c9ddfb35b2ab name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.976383163Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be4271d7-dcac-4e4a-a982-655f00e64bb3 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.976467419Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be4271d7-dcac-4e4a-a982-655f00e64bb3 name=/runtime.v1.RuntimeService/Version
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.977808364Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eaa90a40-9dd5-47dd-a06e-0c163492c921 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.978187273Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229456978167625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eaa90a40-9dd5-47dd-a06e-0c163492c921 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.978908060Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68ffa0fe-6104-4947-9d49-36e869dc6cdd name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.978983137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68ffa0fe-6104-4947-9d49-36e869dc6cdd name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:57:36 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:36.979029678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=68ffa0fe-6104-4947-9d49-36e869dc6cdd name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:57:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:37.011209060Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c018ee1-2aac-4fa5-8e04-2946bafa24ce name=/runtime.v1.RuntimeService/Version
	Jun 12 21:57:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:37.011288430Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c018ee1-2aac-4fa5-8e04-2946bafa24ce name=/runtime.v1.RuntimeService/Version
	Jun 12 21:57:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:37.013141551Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77fbc86c-ff90-4d80-8e42-3cdcf3cbfebc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:57:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:37.013654950Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718229457013628623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77fbc86c-ff90-4d80-8e42-3cdcf3cbfebc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 12 21:57:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:37.014503648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8de8f2be-e644-4ca6-a6de-d994da6d5453 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:57:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:37.014628837Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8de8f2be-e644-4ca6-a6de-d994da6d5453 name=/runtime.v1.RuntimeService/ListContainers
	Jun 12 21:57:37 old-k8s-version-983302 crio[651]: time="2024-06-12 21:57:37.014659803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8de8f2be-e644-4ca6-a6de-d994da6d5453 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun12 21:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056321] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044953] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.826136] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.486922] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.757887] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.131253] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.069367] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066150] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.207548] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.141383] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.298797] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +6.786115] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[  +0.069711] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.050220] systemd-fstab-generator[967]: Ignoring "noauto" option for root device
	[ +13.489395] kauditd_printk_skb: 46 callbacks suppressed
	[Jun12 21:42] systemd-fstab-generator[5031]: Ignoring "noauto" option for root device
	[Jun12 21:44] systemd-fstab-generator[5305]: Ignoring "noauto" option for root device
	[  +0.065559] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:57:37 up 19 min,  0 users,  load average: 0.04, 0.03, 0.01
	Linux old-k8s-version-983302 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 12 21:57:36 old-k8s-version-983302 kubelet[6768]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0000d9260, 0xc000960788, 0x70c7020, 0x0, 0x0)
	Jun 12 21:57:36 old-k8s-version-983302 kubelet[6768]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Jun 12 21:57:36 old-k8s-version-983302 kubelet[6768]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0008e3180)
	Jun 12 21:57:36 old-k8s-version-983302 kubelet[6768]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1245 +0x7e
	Jun 12 21:57:36 old-k8s-version-983302 kubelet[6768]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jun 12 21:57:36 old-k8s-version-983302 kubelet[6768]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Jun 12 21:57:36 old-k8s-version-983302 kubelet[6768]: goroutine 158 [select]:
	Jun 12 21:57:36 old-k8s-version-983302 kubelet[6768]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000953540, 0xc000b68f01, 0xc000899a80, 0xc000b54dc0, 0xc000b3ac00, 0xc000b3abc0)
	Jun 12 21:57:36 old-k8s-version-983302 kubelet[6768]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jun 12 21:57:36 old-k8s-version-983302 kubelet[6768]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000b68fc0, 0x0, 0x0)
	Jun 12 21:57:36 old-k8s-version-983302 kubelet[6768]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jun 12 21:57:36 old-k8s-version-983302 kubelet[6768]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008e3180)
	Jun 12 21:57:36 old-k8s-version-983302 kubelet[6768]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jun 12 21:57:36 old-k8s-version-983302 kubelet[6768]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jun 12 21:57:36 old-k8s-version-983302 kubelet[6768]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jun 12 21:57:36 old-k8s-version-983302 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 12 21:57:36 old-k8s-version-983302 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 12 21:57:37 old-k8s-version-983302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 135.
	Jun 12 21:57:37 old-k8s-version-983302 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 12 21:57:37 old-k8s-version-983302 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 12 21:57:37 old-k8s-version-983302 kubelet[6851]: I0612 21:57:37.232079    6851 server.go:416] Version: v1.20.0
	Jun 12 21:57:37 old-k8s-version-983302 kubelet[6851]: I0612 21:57:37.232388    6851 server.go:837] Client rotation is on, will bootstrap in background
	Jun 12 21:57:37 old-k8s-version-983302 kubelet[6851]: I0612 21:57:37.234683    6851 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 12 21:57:37 old-k8s-version-983302 kubelet[6851]: W0612 21:57:37.235430    6851 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jun 12 21:57:37 old-k8s-version-983302 kubelet[6851]: I0612 21:57:37.235859    6851 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-983302 -n old-k8s-version-983302
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-983302 -n old-k8s-version-983302: exit status 2 (224.580247ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-983302" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (116.37s)
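The FAIL above traces back to the kubeadm wait-control-plane timeout captured earlier in this log: the kubelet on old-k8s-version-983302 never answered http://localhost:10248/healthz, so the API server was never started and the addon check had nothing to query. A minimal troubleshooting sketch follows, consolidating only the commands the log itself suggests (systemctl/journalctl for the kubelet, crictl for the control-plane containers, and the --extra-config=kubelet.cgroup-driver=systemd hint); the exact flags the failing test passes to minikube start are not reproduced here, so the restart line below is an assumption:

	# inspect the kubelet on the node, as suggested by kubeadm in the log above
	minikube ssh -p old-k8s-version-983302 "sudo systemctl status kubelet"
	minikube ssh -p old-k8s-version-983302 "sudo journalctl -xeu kubelet"
	# list control-plane containers via CRI-O, as suggested by kubeadm
	minikube ssh -p old-k8s-version-983302 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry with the cgroup-driver hint from the log (flags beyond --extra-config are assumed)
	minikube start -p old-k8s-version-983302 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd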

                                                
                                    

Test pass (244/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 53.7
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.30.1/json-events 44.06
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.06
18 TestDownloadOnly/v1.30.1/DeleteAll 0.12
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.55
22 TestOffline 66.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 149.07
29 TestAddons/parallel/Registry 17.74
31 TestAddons/parallel/InspektorGadget 10.88
33 TestAddons/parallel/HelmTiller 13.59
35 TestAddons/parallel/CSI 104.03
36 TestAddons/parallel/Headlamp 13.32
37 TestAddons/parallel/CloudSpanner 5.56
38 TestAddons/parallel/LocalPath 56.09
39 TestAddons/parallel/NvidiaDevicePlugin 5.6
40 TestAddons/parallel/Yakd 5.01
44 TestAddons/serial/GCPAuth/Namespaces 0.12
46 TestCertOptions 73.7
47 TestCertExpiration 292.02
49 TestForceSystemdFlag 54.07
50 TestForceSystemdEnv 52.71
52 TestKVMDriverInstallOrUpdate 45.22
56 TestErrorSpam/setup 42.08
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.71
59 TestErrorSpam/pause 1.56
60 TestErrorSpam/unpause 1.58
61 TestErrorSpam/stop 5.38
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 56.55
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.88
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.01
73 TestFunctional/serial/CacheCmd/cache/add_local 2.2
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
78 TestFunctional/serial/CacheCmd/cache/delete 0.08
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 32.48
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.49
84 TestFunctional/serial/LogsFileCmd 1.55
85 TestFunctional/serial/InvalidService 4.27
87 TestFunctional/parallel/ConfigCmd 0.32
88 TestFunctional/parallel/DashboardCmd 18.89
89 TestFunctional/parallel/DryRun 0.31
90 TestFunctional/parallel/InternationalLanguage 0.16
91 TestFunctional/parallel/StatusCmd 1.26
95 TestFunctional/parallel/ServiceCmdConnect 7.44
96 TestFunctional/parallel/AddonsCmd 0.11
97 TestFunctional/parallel/PersistentVolumeClaim 45.44
99 TestFunctional/parallel/SSHCmd 0.38
100 TestFunctional/parallel/CpCmd 1.43
101 TestFunctional/parallel/MySQL 36.67
102 TestFunctional/parallel/FileSync 0.2
103 TestFunctional/parallel/CertSync 1.59
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
111 TestFunctional/parallel/License 0.64
112 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
114 TestFunctional/parallel/MountCmd/any-port 11.5
115 TestFunctional/parallel/ProfileCmd/profile_list 0.32
116 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
117 TestFunctional/parallel/Version/short 0.04
118 TestFunctional/parallel/Version/components 0.67
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
122 TestFunctional/parallel/ServiceCmd/List 0.48
123 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
124 TestFunctional/parallel/MountCmd/specific-port 1.61
125 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
126 TestFunctional/parallel/ServiceCmd/Format 0.3
127 TestFunctional/parallel/ServiceCmd/URL 0.35
137 TestFunctional/parallel/MountCmd/VerifyCleanup 0.66
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
142 TestFunctional/parallel/ImageCommands/ImageBuild 3.8
143 TestFunctional/parallel/ImageCommands/Setup 2
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.15
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.91
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.04
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.52
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 6.74
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.8
151 TestFunctional/delete_addon-resizer_images 0.06
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 256.73
158 TestMultiControlPlane/serial/DeployApp 6.33
159 TestMultiControlPlane/serial/PingHostFromPods 1.27
160 TestMultiControlPlane/serial/AddWorkerNode 47.4
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
163 TestMultiControlPlane/serial/CopyFile 12.7
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
169 TestMultiControlPlane/serial/DeleteSecondaryNode 18.28
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 342.8
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
174 TestMultiControlPlane/serial/AddSecondaryNode 75.17
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
179 TestJSONOutput/start/Command 97.06
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.7
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.6
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.38
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.18
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 88.3
211 TestMountStart/serial/StartWithMountFirst 25.63
212 TestMountStart/serial/VerifyMountFirst 0.37
213 TestMountStart/serial/StartWithMountSecond 28.58
214 TestMountStart/serial/VerifyMountSecond 0.36
215 TestMountStart/serial/DeleteFirst 0.66
216 TestMountStart/serial/VerifyMountPostDelete 0.37
217 TestMountStart/serial/Stop 1.28
218 TestMountStart/serial/RestartStopped 20.48
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 99.65
223 TestMultiNode/serial/DeployApp2Nodes 6.43
224 TestMultiNode/serial/PingHostFrom2Pods 0.77
225 TestMultiNode/serial/AddNode 41.1
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 6.97
229 TestMultiNode/serial/StopNode 2.37
230 TestMultiNode/serial/StartAfterStop 29.18
232 TestMultiNode/serial/DeleteNode 2.24
234 TestMultiNode/serial/RestartMultiNode 172.03
235 TestMultiNode/serial/ValidateNameConflict 42.15
242 TestScheduledStopUnix 115.63
246 TestRunningBinaryUpgrade 246.31
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
252 TestNoKubernetes/serial/StartWithK8s 97.11
260 TestNetworkPlugins/group/false 2.73
264 TestNoKubernetes/serial/StartWithStopK8s 31.52
265 TestNoKubernetes/serial/Start 50.51
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
267 TestNoKubernetes/serial/ProfileList 2.07
268 TestNoKubernetes/serial/Stop 1.47
269 TestNoKubernetes/serial/StartNoArgs 44.2
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
279 TestPause/serial/Start 94.11
280 TestStoppedBinaryUpgrade/Setup 2.61
281 TestStoppedBinaryUpgrade/Upgrade 121.79
283 TestNetworkPlugins/group/auto/Start 72.89
284 TestNetworkPlugins/group/kindnet/Start 96.11
285 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
286 TestNetworkPlugins/group/calico/Start 119.52
287 TestNetworkPlugins/group/auto/KubeletFlags 0.24
288 TestNetworkPlugins/group/auto/NetCatPod 11.37
289 TestNetworkPlugins/group/auto/DNS 0.16
290 TestNetworkPlugins/group/auto/Localhost 0.13
291 TestNetworkPlugins/group/auto/HairPin 0.14
292 TestNetworkPlugins/group/custom-flannel/Start 91.82
293 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
294 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
295 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
296 TestNetworkPlugins/group/kindnet/DNS 0.16
297 TestNetworkPlugins/group/kindnet/Localhost 0.14
298 TestNetworkPlugins/group/kindnet/HairPin 0.14
299 TestNetworkPlugins/group/flannel/Start 92.39
300 TestNetworkPlugins/group/bridge/Start 123.57
301 TestNetworkPlugins/group/calico/ControllerPod 6.01
302 TestNetworkPlugins/group/calico/KubeletFlags 0.35
303 TestNetworkPlugins/group/calico/NetCatPod 14.5
304 TestNetworkPlugins/group/calico/DNS 0.17
305 TestNetworkPlugins/group/calico/Localhost 0.13
306 TestNetworkPlugins/group/calico/HairPin 0.13
307 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.44
308 TestNetworkPlugins/group/enable-default-cni/Start 69.02
309 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.76
310 TestNetworkPlugins/group/custom-flannel/DNS 0.2
311 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
312 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
313 TestNetworkPlugins/group/flannel/ControllerPod 6.01
316 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
317 TestNetworkPlugins/group/flannel/NetCatPod 12.25
318 TestNetworkPlugins/group/flannel/DNS 0.25
319 TestNetworkPlugins/group/flannel/Localhost 0.17
320 TestNetworkPlugins/group/flannel/HairPin 0.16
322 TestStartStop/group/no-preload/serial/FirstStart 119.81
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
324 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
325 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.31
326 TestNetworkPlugins/group/bridge/NetCatPod 11.28
327 TestNetworkPlugins/group/bridge/DNS 0.17
328 TestNetworkPlugins/group/bridge/Localhost 0.13
329 TestNetworkPlugins/group/bridge/HairPin 0.12
330 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
331 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
332 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
334 TestStartStop/group/embed-certs/serial/FirstStart 100.02
336 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.82
337 TestStartStop/group/no-preload/serial/DeployApp 10.39
338 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
339 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.01
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
343 TestStartStop/group/embed-certs/serial/DeployApp 10.27
344 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
350 TestStartStop/group/no-preload/serial/SecondStart 695.42
351 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 575.5
353 TestStartStop/group/embed-certs/serial/SecondStart 625.52
354 TestStartStop/group/old-k8s-version/serial/Stop 1.39
355 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
366 TestStartStop/group/newest-cni/serial/FirstStart 56.45
367 TestStartStop/group/newest-cni/serial/DeployApp 0
368 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
369 TestStartStop/group/newest-cni/serial/Stop 7.36
370 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.52
371 TestStartStop/group/newest-cni/serial/SecondStart 36.22
372 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
375 TestStartStop/group/newest-cni/serial/Pause 2.38
x
+
TestDownloadOnly/v1.20.0/json-events (53.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-691398 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-691398 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (53.700952286s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (53.70s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-691398
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-691398: exit status 85 (56.485235ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-691398 | jenkins | v1.33.1 | 12 Jun 24 20:10 UTC |          |
	|         | -p download-only-691398        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 20:10:48
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 20:10:48.459144   21456 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:10:48.459418   21456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:10:48.459428   21456 out.go:304] Setting ErrFile to fd 2...
	I0612 20:10:48.459435   21456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:10:48.459616   21456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	W0612 20:10:48.459769   21456 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17779-14199/.minikube/config/config.json: open /home/jenkins/minikube-integration/17779-14199/.minikube/config/config.json: no such file or directory
	I0612 20:10:48.460347   21456 out.go:298] Setting JSON to true
	I0612 20:10:48.461181   21456 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3193,"bootTime":1718219855,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 20:10:48.461232   21456 start.go:139] virtualization: kvm guest
	I0612 20:10:48.463719   21456 out.go:97] [download-only-691398] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 20:10:48.465237   21456 out.go:169] MINIKUBE_LOCATION=17779
	W0612 20:10:48.463827   21456 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball: no such file or directory
	I0612 20:10:48.463844   21456 notify.go:220] Checking for updates...
	I0612 20:10:48.467966   21456 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 20:10:48.469292   21456 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:10:48.470664   21456 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:10:48.472113   21456 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0612 20:10:48.474655   21456 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0612 20:10:48.474897   21456 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 20:10:48.574096   21456 out.go:97] Using the kvm2 driver based on user configuration
	I0612 20:10:48.574133   21456 start.go:297] selected driver: kvm2
	I0612 20:10:48.574141   21456 start.go:901] validating driver "kvm2" against <nil>
	I0612 20:10:48.574493   21456 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:10:48.574629   21456 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 20:10:48.589836   21456 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0612 20:10:48.589882   21456 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0612 20:10:48.590352   21456 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0612 20:10:48.590505   21456 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0612 20:10:48.590571   21456 cni.go:84] Creating CNI manager for ""
	I0612 20:10:48.590583   21456 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 20:10:48.590590   21456 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0612 20:10:48.590642   21456 start.go:340] cluster config:
	{Name:download-only-691398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-691398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:10:48.590802   21456 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:10:48.592763   21456 out.go:97] Downloading VM boot image ...
	I0612 20:10:48.592807   21456 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17779-14199/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0612 20:10:59.832935   21456 out.go:97] Starting "download-only-691398" primary control-plane node in "download-only-691398" cluster
	I0612 20:10:59.832956   21456 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 20:10:59.963410   21456 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0612 20:10:59.963440   21456 cache.go:56] Caching tarball of preloaded images
	I0612 20:10:59.963625   21456 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 20:10:59.966025   21456 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0612 20:10:59.966048   21456 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0612 20:11:00.076440   21456 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0612 20:11:15.287786   21456 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0612 20:11:15.287874   21456 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0612 20:11:16.193653   21456 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0612 20:11:16.193994   21456 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/download-only-691398/config.json ...
	I0612 20:11:16.194028   21456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/download-only-691398/config.json: {Name:mk57bc1de0f58d30d396841be771abfca602cded Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:11:16.194175   21456 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0612 20:11:16.194320   21456 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-691398 host does not exist
	  To start a cluster, run: "minikube start -p download-only-691398"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
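The exit status 85 from minikube logs is consistent with the stdout above: the download-only profile never started a control-plane node ("The control-plane node download-only-691398 host does not exist"), so there are no cluster logs to collect and the test treats the non-zero exit as expected. A hypothetical follow-up, taken from the command the log itself prints, would be:

	# quoted from the log output above; not part of the test run
	minikube start -p download-only-691398
	minikube logs -p download-only-691398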

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-691398
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestDownloadOnly/v1.30.1/json-events (44.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-740695 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-740695 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (44.059416762s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (44.06s)

                                                
                                    
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-740695
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-740695: exit status 85 (56.145513ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-691398 | jenkins | v1.33.1 | 12 Jun 24 20:10 UTC |                     |
	|         | -p download-only-691398        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 12 Jun 24 20:11 UTC | 12 Jun 24 20:11 UTC |
	| delete  | -p download-only-691398        | download-only-691398 | jenkins | v1.33.1 | 12 Jun 24 20:11 UTC | 12 Jun 24 20:11 UTC |
	| start   | -o=json --download-only        | download-only-740695 | jenkins | v1.33.1 | 12 Jun 24 20:11 UTC |                     |
	|         | -p download-only-740695        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 20:11:42
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 20:11:42.454850   21793 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:11:42.454959   21793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:11:42.454971   21793 out.go:304] Setting ErrFile to fd 2...
	I0612 20:11:42.454978   21793 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:11:42.455234   21793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:11:42.455843   21793 out.go:298] Setting JSON to true
	I0612 20:11:42.456742   21793 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3247,"bootTime":1718219855,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 20:11:42.456800   21793 start.go:139] virtualization: kvm guest
	I0612 20:11:42.458996   21793 out.go:97] [download-only-740695] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 20:11:42.460405   21793 out.go:169] MINIKUBE_LOCATION=17779
	I0612 20:11:42.459122   21793 notify.go:220] Checking for updates...
	I0612 20:11:42.463726   21793 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 20:11:42.465184   21793 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:11:42.466606   21793 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:11:42.467854   21793 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0612 20:11:42.470391   21793 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0612 20:11:42.470598   21793 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 20:11:42.501733   21793 out.go:97] Using the kvm2 driver based on user configuration
	I0612 20:11:42.501775   21793 start.go:297] selected driver: kvm2
	I0612 20:11:42.501787   21793 start.go:901] validating driver "kvm2" against <nil>
	I0612 20:11:42.502128   21793 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:11:42.502219   21793 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17779-14199/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0612 20:11:42.516585   21793 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0612 20:11:42.516631   21793 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0612 20:11:42.517102   21793 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0612 20:11:42.517268   21793 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0612 20:11:42.517340   21793 cni.go:84] Creating CNI manager for ""
	I0612 20:11:42.517356   21793 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0612 20:11:42.517370   21793 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0612 20:11:42.517443   21793 start.go:340] cluster config:
	{Name:download-only-740695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-740695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:11:42.517539   21793 iso.go:125] acquiring lock: {Name:mka3f0e4342e40c53a8ce19d62c157a63127ccf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 20:11:42.519354   21793 out.go:97] Starting "download-only-740695" primary control-plane node in "download-only-740695" cluster
	I0612 20:11:42.519376   21793 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:11:43.117463   21793 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0612 20:11:43.117502   21793 cache.go:56] Caching tarball of preloaded images
	I0612 20:11:43.117702   21793 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:11:43.119858   21793 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0612 20:11:43.119880   21793 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 ...
	I0612 20:11:43.230337   21793 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:a8c8ea593b2bc93a46ce7b040a44f86d -> /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0612 20:11:53.948576   21793 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 ...
	I0612 20:11:53.948691   21793 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/17779-14199/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 ...
	I0612 20:11:54.699147   21793 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0612 20:11:54.699523   21793 profile.go:143] Saving config to /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/download-only-740695/config.json ...
	I0612 20:11:54.699554   21793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/download-only-740695/config.json: {Name:mka43c2949e503193afaf48d2f331bbacd83f330 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 20:11:54.699727   21793 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0612 20:11:54.699901   21793 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17779-14199/.minikube/cache/linux/amd64/v1.30.1/kubectl
	
	
	* The control-plane node download-only-740695 host does not exist
	  To start a cluster, run: "minikube start -p download-only-740695"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-740695
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-323011 --alsologtostderr --binary-mirror http://127.0.0.1:40201 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-323011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-323011
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
TestOffline (66.61s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-695567 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-695567 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m5.61235772s)
helpers_test.go:175: Cleaning up "offline-crio-695567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-695567
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-695567: (1.000678051s)
--- PASS: TestOffline (66.61s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-899843
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-899843: exit status 85 (57.632129ms)

                                                
                                                
-- stdout --
	* Profile "addons-899843" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-899843"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-899843
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-899843: exit status 85 (56.70486ms)

                                                
                                                
-- stdout --
	* Profile "addons-899843" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-899843"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (149.07s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-899843 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-899843 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m29.073500196s)
--- PASS: TestAddons/Setup (149.07s)

                                                
                                    
TestAddons/parallel/Registry (17.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 17.180877ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-d4wfp" [4dedad66-548d-4156-a741-4077e86eb02b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011034162s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-l4fcl" [947cca02-a2df-4d5e-b84a-0cb7bb05d876] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.009498033s
addons_test.go:342: (dbg) Run:  kubectl --context addons-899843 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-899843 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-899843 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.948066462s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-899843 ip
2024/06/12 20:15:13 [DEBUG] GET http://192.168.39.248:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-899843 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.74s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.88s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ljwsf" [559e269f-5bc9-4589-8bd6-8e6741f26381] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.026192557s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-899843
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-899843: (5.855234981s)
--- PASS: TestAddons/parallel/InspektorGadget (10.88s)

                                                
                                    
TestAddons/parallel/HelmTiller (13.59s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 15.067818ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-wrb4j" [d5a32aea-e711-4681-8246-f238b7566914] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.016422944s
addons_test.go:475: (dbg) Run:  kubectl --context addons-899843 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-899843 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.939380293s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-899843 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.59s)

                                                
                                    
TestAddons/parallel/CSI (104.03s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 25.477313ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-899843 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-899843 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [305d78db-6cd2-45ed-9d5d-cc33a0141a28] Pending
helpers_test.go:344: "task-pv-pod" [305d78db-6cd2-45ed-9d5d-cc33a0141a28] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [305d78db-6cd2-45ed-9d5d-cc33a0141a28] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004220323s
addons_test.go:586: (dbg) Run:  kubectl --context addons-899843 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-899843 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-899843 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-899843 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-899843 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-899843 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-899843 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [064dda79-42dc-4b85-8df5-dc765976b4fb] Pending
helpers_test.go:344: "task-pv-pod-restore" [064dda79-42dc-4b85-8df5-dc765976b4fb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [064dda79-42dc-4b85-8df5-dc765976b4fb] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.203201178s
addons_test.go:628: (dbg) Run:  kubectl --context addons-899843 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-899843 delete pod task-pv-pod-restore: (1.200682593s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-899843 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-899843 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-899843 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-899843 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.812826599s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-899843 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (104.03s)

                                                
                                    
TestAddons/parallel/Headlamp (13.32s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-899843 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-899843 --alsologtostderr -v=1: (1.31130023s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7fc69f7444-2hfkx" [c88103f0-de17-4f17-a1dd-fa97f936c891] Pending
helpers_test.go:344: "headlamp-7fc69f7444-2hfkx" [c88103f0-de17-4f17-a1dd-fa97f936c891] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7fc69f7444-2hfkx" [c88103f0-de17-4f17-a1dd-fa97f936c891] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004385192s
--- PASS: TestAddons/parallel/Headlamp (13.32s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-nhgxl" [35c199b2-2447-4b6b-9ea5-2d6808246eb1] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004713565s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-899843
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                    
TestAddons/parallel/LocalPath (56.09s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-899843 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-899843 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-899843 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fa3b9a87-1b70-46ce-9bb7-2eff07e1b1e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fa3b9a87-1b70-46ce-9bb7-2eff07e1b1e0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fa3b9a87-1b70-46ce-9bb7-2eff07e1b1e0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003852612s
addons_test.go:992: (dbg) Run:  kubectl --context addons-899843 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-899843 ssh "cat /opt/local-path-provisioner/pvc-0b5a2113-5bb0-41c3-b569-15c053bb7f98_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-899843 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-899843 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-899843 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-899843 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.220614798s)
--- PASS: TestAddons/parallel/LocalPath (56.09s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.6s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7t2hk" [318904a0-3329-4548-9694-082dce3d63ff] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00562123s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-899843
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.60s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-mwtps" [a01c7a18-474f-45e3-906d-4e7b54800ba0] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005439001s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-899843 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-899843 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (73.7s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-449240 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-449240 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m12.524873083s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-449240 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-449240 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-449240 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-449240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-449240
--- PASS: TestCertOptions (73.70s)

                                                
                                    
TestCertExpiration (292.02s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-112791 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-112791 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (44.239072211s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-112791 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0612 21:24:56.704839   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-112791 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m6.985769517s)
helpers_test.go:175: Cleaning up "cert-expiration-112791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-112791
--- PASS: TestCertExpiration (292.02s)

                                                
                                    
TestForceSystemdFlag (54.07s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-732641 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-732641 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (52.821049633s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-732641 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-732641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-732641
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-732641: (1.03100426s)
--- PASS: TestForceSystemdFlag (54.07s)

                                                
                                    
TestForceSystemdEnv (52.71s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-436071 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-436071 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (51.888884857s)
helpers_test.go:175: Cleaning up "force-systemd-env-436071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-436071
--- PASS: TestForceSystemdEnv (52.71s)

                                                
                                    
TestKVMDriverInstallOrUpdate (45.22s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (45.22s)

                                                
                                    
TestErrorSpam/setup (42.08s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-047966 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-047966 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-047966 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-047966 --driver=kvm2  --container-runtime=crio: (42.079404845s)
--- PASS: TestErrorSpam/setup (42.08s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
TestErrorSpam/pause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 pause
--- PASS: TestErrorSpam/pause (1.56s)

                                                
                                    
TestErrorSpam/unpause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 unpause
--- PASS: TestErrorSpam/unpause (1.58s)

                                                
                                    
TestErrorSpam/stop (5.38s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 stop: (2.281042194s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 stop: (1.900643135s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-047966 --log_dir /tmp/nospam-047966 stop: (1.195081192s)
--- PASS: TestErrorSpam/stop (5.38s)
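
The ErrorSpam subtests above repeatedly invoke minikube subcommands against the nospam-047966 profile and are meant to fail if unexpected warning or error lines show up in the command output. A minimal, self-contained sketch of that idea in Go follows; it is not the actual error_spam_test.go implementation, and the binary path and the simple substring filter are assumptions for illustration only.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runAndScan executes a minikube subcommand and returns any output lines
// that look like warnings or errors, which an "error spam" check would flag.
func runAndScan(profile string, args ...string) ([]string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", profile}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("%v: %s", err, out)
	}
	var suspicious []string
	for _, line := range strings.Split(string(out), "\n") {
		l := strings.ToLower(line)
		if strings.Contains(l, "error") || strings.Contains(l, "warning") {
			suspicious = append(suspicious, line)
		}
	}
	return suspicious, nil
}

func main() {
	for _, sub := range [][]string{{"pause"}, {"unpause"}, {"stop"}} {
		lines, err := runAndScan("nospam-047966", sub...)
		if err != nil {
			fmt.Println("command failed:", err)
			continue
		}
		if len(lines) > 0 {
			fmt.Printf("unexpected output from %v: %v\n", sub, lines)
		}
	}
}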

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17779-14199/.minikube/files/etc/test/nested/copy/21444/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (56.55s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944676 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0612 20:24:56.704588   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:24:56.710359   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:24:56.720592   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:24:56.740837   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:24:56.781184   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:24:56.861612   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:24:57.022052   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:24:57.342655   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:24:57.983543   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:24:59.264056   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:25:01.824318   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:25:06.944956   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:25:17.186097   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-944676 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (56.548075607s)
--- PASS: TestFunctional/serial/StartWithProxy (56.55s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (38.88s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944676 --alsologtostderr -v=8
E0612 20:25:37.666659   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-944676 --alsologtostderr -v=8: (38.881537633s)
functional_test.go:659: soft start took 38.882102651s for "functional-944676" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.88s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-944676 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-944676 cache add registry.k8s.io/pause:3.3: (1.08758543s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-944676 /tmp/TestFunctionalserialCacheCmdcacheadd_local3102758622/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 cache add minikube-local-cache-test:functional-944676
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-944676 cache add minikube-local-cache-test:functional-944676: (1.880561102s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 cache delete minikube-local-cache-test:functional-944676
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-944676
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.20s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944676 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (212.722296ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)
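
The cache_reload sequence relies on the exit status of crictl inspecti inside the node: non-zero once the image has been removed, zero again after minikube cache reload restores it from the local cache. A rough Go sketch of that check, with the profile name, image, and binary path taken from the log above and error handling trimmed, so treat it as illustrative only:

package main

import (
	"fmt"
	"os/exec"
)

const (
	minikube = "out/minikube-linux-amd64"
	profile  = "functional-944676"
	image    = "registry.k8s.io/pause:latest"
)

// imageCached reports whether crictl can inspect the image inside the node.
func imageCached() bool {
	cmd := exec.Command(minikube, "-p", profile, "ssh", "sudo crictl inspecti "+image)
	return cmd.Run() == nil // non-zero exit means "no such image present"
}

func main() {
	// Drop the image from the node, then restore it from minikube's cache.
	exec.Command(minikube, "-p", profile, "ssh", "sudo crictl rmi "+image).Run()
	fmt.Println("cached after rmi:", imageCached()) // expected: false

	exec.Command(minikube, "-p", profile, "cache", "reload").Run()
	fmt.Println("cached after reload:", imageCached()) // expected: true
}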

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 kubectl -- --context functional-944676 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-944676 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.48s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944676 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0612 20:26:18.627763   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-944676 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.480814863s)
functional_test.go:757: restart took 32.480910557s for "functional-944676" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.48s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-944676 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-944676 logs: (1.492086497s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 logs --file /tmp/TestFunctionalserialLogsFileCmd3080833097/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-944676 logs --file /tmp/TestFunctionalserialLogsFileCmd3080833097/001/logs.txt: (1.550563443s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.27s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-944676 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-944676
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-944676: exit status 115 (271.33414ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.53:32277 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-944676 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.27s)
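
In this run, minikube service exits with status 115 (SVC_UNREACHABLE) because the NodePort service exists but has no running backing pod. A small Go sketch that distinguishes that case by exit code; exec.ExitError and ExitCode are standard library, and the specific code 115 is simply what the run above reported:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-944676")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("service reachable:\n" + string(out))
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 115:
		fmt.Println("service defined but unreachable (no running pods)")
	default:
		fmt.Println("unexpected failure:", err)
	}
}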

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944676 config get cpus: exit status 14 (62.510986ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944676 config get cpus: exit status 14 (44.434446ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)
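
The ConfigCmd run shows the contract this test depends on: config get for an unset key exits with status 14 and prints an error on stderr, while config set and config unset succeed silently. A compact Go sketch of that round trip, with the binary path and profile as above, purely for illustration:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// configCmd runs `minikube config <args>` and returns trimmed output plus the exit code.
func configCmd(args ...string) (string, int) {
	cmd := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-944676", "config"}, args...)...)
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		return strings.TrimSpace(string(out)), 0
	case errors.As(err, &exitErr):
		return strings.TrimSpace(string(out)), exitErr.ExitCode()
	default:
		return "", -1 // e.g. binary not found
	}
}

func main() {
	configCmd("unset", "cpus")
	if _, code := configCmd("get", "cpus"); code == 14 {
		fmt.Println("cpus is unset, as expected")
	}
	configCmd("set", "cpus", "2")
	if val, code := configCmd("get", "cpus"); code == 0 {
		fmt.Println("cpus =", val)
	}
	configCmd("unset", "cpus")
}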

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (18.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-944676 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-944676 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 30799: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.89s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944676 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-944676 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (157.228158ms)

                                                
                                                
-- stdout --
	* [functional-944676] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 20:26:50.054311   30392 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:26:50.055028   30392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:26:50.055037   30392 out.go:304] Setting ErrFile to fd 2...
	I0612 20:26:50.055042   30392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:26:50.055262   30392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:26:50.055782   30392 out.go:298] Setting JSON to false
	I0612 20:26:50.056772   30392 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4155,"bootTime":1718219855,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 20:26:50.056832   30392 start.go:139] virtualization: kvm guest
	I0612 20:26:50.058560   30392 out.go:177] * [functional-944676] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 20:26:50.059843   30392 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 20:26:50.059918   30392 notify.go:220] Checking for updates...
	I0612 20:26:50.061203   30392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 20:26:50.063096   30392 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:26:50.064755   30392 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:26:50.066208   30392 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 20:26:50.067608   30392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 20:26:50.069585   30392 config.go:182] Loaded profile config "functional-944676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:26:50.070183   30392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:26:50.070234   30392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:26:50.086131   30392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37397
	I0612 20:26:50.086585   30392 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:26:50.087192   30392 main.go:141] libmachine: Using API Version  1
	I0612 20:26:50.087215   30392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:26:50.087624   30392 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:26:50.087866   30392 main.go:141] libmachine: (functional-944676) Calling .DriverName
	I0612 20:26:50.088130   30392 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 20:26:50.088561   30392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:26:50.088610   30392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:26:50.105226   30392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43619
	I0612 20:26:50.105698   30392 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:26:50.106226   30392 main.go:141] libmachine: Using API Version  1
	I0612 20:26:50.106251   30392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:26:50.106593   30392 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:26:50.106748   30392 main.go:141] libmachine: (functional-944676) Calling .DriverName
	I0612 20:26:50.153771   30392 out.go:177] * Using the kvm2 driver based on existing profile
	I0612 20:26:50.155202   30392 start.go:297] selected driver: kvm2
	I0612 20:26:50.155220   30392 start.go:901] validating driver "kvm2" against &{Name:functional-944676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:functional-944676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:26:50.155361   30392 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 20:26:50.157877   30392 out.go:177] 
	W0612 20:26:50.159292   30392 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0612 20:26:50.160735   30392 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944676 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
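
The DryRun failure path is driven by exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) when --memory 250MB falls below the usable minimum of 1800MB reported above, while the second --dry-run invocation against the existing profile succeeds. A sketch of probing both paths from Go; the flags are copied from the log and the helper is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// dryRun starts minikube in --dry-run mode with the given extra flags and
// returns the process exit code (0 on success).
func dryRun(extra ...string) int {
	args := append([]string{"start", "-p", "functional-944676", "--dry-run",
		"--driver=kvm2", "--container-runtime=crio"}, extra...)
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	if err := cmd.Run(); err != nil && cmd.ProcessState == nil {
		return -1 // the command could not be started at all
	}
	return cmd.ProcessState.ExitCode()
}

func main() {
	fmt.Println("250MB   ->", dryRun("--memory", "250MB")) // expected: 23 (insufficient memory)
	fmt.Println("default ->", dryRun())                    // expected: 0
}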

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944676 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-944676 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (158.722631ms)

                                                
                                                
-- stdout --
	* [functional-944676] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 20:26:49.908385   30347 out.go:291] Setting OutFile to fd 1 ...
	I0612 20:26:49.908549   30347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:26:49.908562   30347 out.go:304] Setting ErrFile to fd 2...
	I0612 20:26:49.908569   30347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 20:26:49.908927   30347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 20:26:49.909445   30347 out.go:298] Setting JSON to false
	I0612 20:26:49.910491   30347 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4155,"bootTime":1718219855,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 20:26:49.910569   30347 start.go:139] virtualization: kvm guest
	I0612 20:26:49.912504   30347 out.go:177] * [functional-944676] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0612 20:26:49.914365   30347 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 20:26:49.914369   30347 notify.go:220] Checking for updates...
	I0612 20:26:49.916239   30347 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 20:26:49.918115   30347 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 20:26:49.919525   30347 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 20:26:49.920534   30347 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 20:26:49.921899   30347 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 20:26:49.923909   30347 config.go:182] Loaded profile config "functional-944676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 20:26:49.924540   30347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:26:49.924628   30347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:26:49.941093   30347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35047
	I0612 20:26:49.941724   30347 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:26:49.942359   30347 main.go:141] libmachine: Using API Version  1
	I0612 20:26:49.942382   30347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:26:49.942780   30347 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:26:49.942973   30347 main.go:141] libmachine: (functional-944676) Calling .DriverName
	I0612 20:26:49.943345   30347 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 20:26:49.943755   30347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 20:26:49.943792   30347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 20:26:49.961684   30347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37697
	I0612 20:26:49.962051   30347 main.go:141] libmachine: () Calling .GetVersion
	I0612 20:26:49.962523   30347 main.go:141] libmachine: Using API Version  1
	I0612 20:26:49.962542   30347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 20:26:49.962883   30347 main.go:141] libmachine: () Calling .GetMachineName
	I0612 20:26:49.963054   30347 main.go:141] libmachine: (functional-944676) Calling .DriverName
	I0612 20:26:49.997503   30347 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0612 20:26:49.998992   30347 start.go:297] selected driver: kvm2
	I0612 20:26:49.999007   30347 start.go:901] validating driver "kvm2" against &{Name:functional-944676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:functional-944676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 20:26:49.999150   30347 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 20:26:50.001729   30347 out.go:177] 
	W0612 20:26:50.003113   30347 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0612 20:26:50.004542   30347 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-944676 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-944676 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-p8grk" [ab3be99b-da8f-4d6b-b43e-dcd53e0970d1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-p8grk" [ab3be99b-da8f-4d6b-b43e-dcd53e0970d1] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004612817s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.53:30302
functional_test.go:1671: http://192.168.39.53:30302: success! body:

Hostname: hello-node-connect-57b4589c47-p8grk

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.53:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.53:30302
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.44s)
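
ServiceCmdConnect exposes the hello-node-connect deployment as a NodePort service, asks minikube service --url for the endpoint, and then checks that an HTTP GET returns the echoserver response shown above (the body echoes the pod hostname and request headers). A minimal Go client for that last step; the hard-coded URL is the one printed in this run and can be overridden on the command line:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func main() {
	url := "http://192.168.39.53:30302" // from `minikube service hello-node-connect --url`
	if len(os.Args) > 1 {
		url = os.Args[1]
	}

	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if strings.Contains(string(body), "Hostname: hello-node-connect") {
		fmt.Println("endpoint healthy:", resp.Status)
	} else {
		fmt.Println("unexpected body:\n" + string(body))
	}
}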

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (45.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3914bcd6-ea0a-42db-8fb8-b1a057f58ef6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005692937s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-944676 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-944676 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-944676 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-944676 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-944676 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3a0d9f8e-c35b-430d-ac13-77ea8fd12cb5] Pending
helpers_test.go:344: "sp-pod" [3a0d9f8e-c35b-430d-ac13-77ea8fd12cb5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3a0d9f8e-c35b-430d-ac13-77ea8fd12cb5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.00603112s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-944676 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-944676 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-944676 delete -f testdata/storage-provisioner/pod.yaml: (1.933715673s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-944676 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2d0e3d4e-ef05-4a38-a31e-897d74dc599d] Pending
helpers_test.go:344: "sp-pod" [2d0e3d4e-ef05-4a38-a31e-897d74dc599d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2d0e3d4e-ef05-4a38-a31e-897d74dc599d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004417524s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-944676 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.44s)
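
The PersistentVolumeClaim flow above is the classic persistence check: bind a PVC, mount it in sp-pod, write /tmp/mount/foo, delete the pod, recreate it against the same claim, and confirm the file is still there. The same sequence can be scripted with kubectl from Go; the manifest paths are the test's own testdata files, so they are placeholders here:

package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-944676"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	steps := [][]string{
		{"apply", "-f", "testdata/storage-provisioner/pvc.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m"},
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m"},
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // foo should still be listed
	}
	for _, s := range steps {
		if err := kubectl(s...); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("data survived pod recreation")
}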

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh -n functional-944676 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 cp functional-944676:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2552423703/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh -n functional-944676 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh -n functional-944676 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.43s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (36.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-944676 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-qq4h8" [0b04282c-d2ef-4fb1-a6d1-4c0b49ae6b26] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-qq4h8" [0b04282c-d2ef-4fb1-a6d1-4c0b49ae6b26] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 35.004147451s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-944676 exec mysql-64454c8b5c-qq4h8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-944676 exec mysql-64454c8b5c-qq4h8 -- mysql -ppassword -e "show databases;": exit status 1 (132.009703ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-944676 exec mysql-64454c8b5c-qq4h8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (36.67s)
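
The first "show databases;" attempt fails with ERROR 2002 because the pod reports Running before mysqld has finished bringing up its socket, and the test simply retries. A small Go retry loop around the same kubectl exec; the pod name is taken from the log and the retry count and backoff are arbitrary:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const pod = "mysql-64454c8b5c-qq4h8"
	for attempt := 1; attempt <= 10; attempt++ {
		cmd := exec.Command("kubectl", "--context", "functional-944676",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Printf("attempt %d succeeded:\n%s", attempt, out)
			return
		}
		// ERROR 2002 (socket not ready yet) is expected early on; back off and retry.
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("mysql never became reachable")
}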

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/21444/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "sudo cat /etc/test/nested/copy/21444/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/21444.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "sudo cat /etc/ssl/certs/21444.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/21444.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "sudo cat /usr/share/ca-certificates/21444.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/214442.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "sudo cat /etc/ssl/certs/214442.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/214442.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "sudo cat /usr/share/ca-certificates/214442.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.59s)
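
CertSync verifies that a user-supplied certificate is synced into the VM both under its original name (/etc/ssl/certs/21444.pem) and under an OpenSSL subject-hash style name (/etc/ssl/certs/51391683.0), i.e. in the places the ca-certificates machinery looks. A small Go loop that performs the same existence checks over minikube ssh, with the paths copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/21444.pem",
		"/usr/share/ca-certificates/21444.pem",
		"/etc/ssl/certs/51391683.0", // hash-named entry for the same cert
	}
	for _, p := range paths {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-944676",
			"ssh", "sudo cat "+p)
		if err := cmd.Run(); err != nil {
			fmt.Println("missing or unreadable:", p, err)
			continue
		}
		fmt.Println("present:", p)
	}
}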

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-944676 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
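
The NodeLabels check uses a kubectl go-template that ranges over the node's labels map and prints the keys. The same template syntax can be tried locally with Go's text/template package, which kubectl's go-template output format is built on; the sample labels below are made up for illustration:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same shape of template as the kubectl invocation above: range over a
	// string map and print each key followed by a space.
	tmpl := template.Must(template.New("labels").Parse(
		"{{range $k, $v := .}}{{$k}} {{end}}\n"))

	labels := map[string]string{
		"kubernetes.io/hostname": "functional-944676",
		"kubernetes.io/os":       "linux",
		"minikube.k8s.io/name":   "functional-944676",
	}
	tmpl.Execute(os.Stdout, labels)
}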

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944676 ssh "sudo systemctl is-active docker": exit status 1 (254.811958ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944676 ssh "sudo systemctl is-active containerd": exit status 1 (253.955621ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
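
With cri-o as the active runtime, this test expects `systemctl is-active docker` and `systemctl is-active containerd` inside the VM to report "inactive"; systemd exits non-zero for inactive units, which is what surfaces above as "Process exited with status 3". A rough equivalent, assuming the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeInactive reports whether a unit is inactive inside the minikube VM.
// `systemctl is-active` prints the state and exits non-zero when the unit is
// not active, so the exit error is expected and only stdout matters here.
func runtimeInactive(profile, unit string) bool {
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out)) == "inactive"
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s inactive: %v\n", unit, runtimeInactive("functional-944676", unit))
	}
}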

                                                
                                    
x
+
TestFunctional/parallel/License (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-944676 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-944676 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-rt4t8" [f1648e7d-633b-45d1-851b-4ea94a3412dc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-rt4t8" [f1648e7d-633b-45d1-851b-4ea94a3412dc] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004184987s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)
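
DeployApp creates a hello-node deployment, exposes it as a NodePort service on 8080, and waits for a matching pod to reach Running. A compact sketch of that flow, assuming the context and echoserver image from this run; the polling loop stands in for the test's wait helper:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-944676"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	if _, err := kubectl("create", "deployment", "hello-node",
		"--image=registry.k8s.io/echoserver:1.8"); err != nil {
		log.Fatal(err)
	}
	if _, err := kubectl("expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080"); err != nil {
		log.Fatal(err)
	}
	// Poll until the pod backing app=hello-node reports Running (the test allows 10m).
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := kubectl("get", "pods", "-l", "app=hello-node",
			"-o", "jsonpath={.items[*].status.phase}")
		if strings.Contains(out, "Running") {
			fmt.Println("hello-node is running")
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("timed out waiting for hello-node")
}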

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (11.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-944676 /tmp/TestFunctionalparallelMountCmdany-port2258386424/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1718224009058189306" to /tmp/TestFunctionalparallelMountCmdany-port2258386424/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1718224009058189306" to /tmp/TestFunctionalparallelMountCmdany-port2258386424/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1718224009058189306" to /tmp/TestFunctionalparallelMountCmdany-port2258386424/001/test-1718224009058189306
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944676 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (226.870133ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 12 20:26 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 12 20:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 12 20:26 test-1718224009058189306
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh cat /mount-9p/test-1718224009058189306
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-944676 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [98c81cf3-0493-4d26-887c-0d31cf195fb4] Pending
helpers_test.go:344: "busybox-mount" [98c81cf3-0493-4d26-887c-0d31cf195fb4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [98c81cf3-0493-4d26-887c-0d31cf195fb4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [98c81cf3-0493-4d26-887c-0d31cf195fb4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.003711931s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-944676 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944676 /tmp/TestFunctionalparallelMountCmdany-port2258386424/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.50s)
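
MountCmd/any-port starts `minikube mount` as a background daemon and then retries `findmnt -T /mount-9p | grep 9p` over SSH until the 9p mount appears; the first failed probe above (exit status 1) is just the mount not being ready yet. A sketch of that start/poll/stop cycle, assuming the profile and mount point from the log and a hypothetical host directory:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-944676"
	hostDir := "/tmp/mount-demo" // hypothetical host directory to export
	if err := os.MkdirAll(hostDir, 0o755); err != nil {
		log.Fatal(err)
	}

	// Run the 9p server in the background, like the test's daemon helper.
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", profile,
		hostDir+":/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill() // the test stops the daemon the same way once done

	// Retry until findmnt reports a 9p filesystem at /mount-9p.
	for i := 0; i < 30; i++ {
		if exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "findmnt -T /mount-9p | grep 9p").Run() == nil {
			fmt.Println("/mount-9p is mounted over 9p")
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("mount never appeared")
}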

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "272.528049ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "46.783855ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "262.552906ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "55.187429ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)
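
The ProfileCmd subtests above mostly measure how long the `profile list` variants take (the "Took ..." lines). A small sketch that times the same variants, assuming the binary path from this run; the wrapper is illustrative only:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// timeCmd runs one of the profile-listing variants exercised above and
// reports how long it took, mirroring the "Took ..." lines in the log.
func timeCmd(args ...string) time.Duration {
	start := time.Now()
	if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
	return time.Since(start)
}

func main() {
	fmt.Println("profile list                    :", timeCmd("profile", "list"))
	fmt.Println("profile list -o json            :", timeCmd("profile", "list", "-o", "json"))
	fmt.Println("profile list -o json --light    :", timeCmd("profile", "list", "-o", "json", "--light"))
}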

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
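
All three UpdateContextCmd variants run `minikube update-context`, which rewrites the kubeconfig entry for the profile so its server address matches the VM's current IP. One way to check the result by hand, assuming kubectl and the profile from this run; the jsonpath filter is just one way to read the server URL back out, not what the test does internally:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-944676"
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"update-context").CombinedOutput(); err != nil {
		log.Fatalf("update-context failed: %v\n%s", err, out)
	}
	// Read the API server URL for this cluster back out of the kubeconfig.
	server, err := exec.Command("kubectl", "config", "view", "-o",
		fmt.Sprintf(`jsonpath={.clusters[?(@.name=="%s")].cluster.server}`, profile)).Output()
	if err != nil {
		log.Fatal(err)
	}
	ip, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ip").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("kubeconfig server %s, VM IP %s\n",
		strings.TrimSpace(string(server)), strings.TrimSpace(string(ip)))
}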

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 service list -o json
functional_test.go:1490: Took "488.487193ms" to run "out/minikube-linux-amd64 -p functional-944676 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-944676 /tmp/TestFunctionalparallelMountCmdspecific-port1166899936/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944676 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (244.576983ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944676 /tmp/TestFunctionalparallelMountCmdspecific-port1166899936/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944676 ssh "sudo umount -f /mount-9p": exit status 1 (191.75974ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-944676 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944676 /tmp/TestFunctionalparallelMountCmdspecific-port1166899936/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.61s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.53:31562
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.53:31562
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
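
The HTTPS/Format/URL variants all resolve the hello-node NodePort service to an endpoint on the VM (https://192.168.39.53:31562 in this run). A sketch that fetches the plain URL and issues a request against it, assuming the service created earlier in ServiceCmd/DeployApp is still present:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-944676",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out))
	fmt.Println("endpoint:", url) // e.g. http://192.168.39.53:31562 in this run

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d, %d bytes\n", resp.StatusCode, len(body))
}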

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-944676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1959549401/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-944676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1959549401/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-944676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1959549401/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-944676 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1959549401/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1959549401/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1959549401/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.66s)
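
VerifyCleanup starts three mount daemons (/mount1, /mount2, /mount3) and then runs `minikube mount -p <profile> --kill=true` to tear them all down at once; the "unable to find parent, assuming dead" lines confirm the processes are already gone when the test later tries to stop them individually. A sketch of that bulk cleanup call, assuming the same profile:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Kill every mount process started for this profile in one call,
	// instead of stopping each background `minikube mount` individually.
	out, err := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-944676", "--kill=true").CombinedOutput()
	if err != nil {
		log.Fatalf("mount --kill failed: %v\n%s", err, out)
	}
	log.Printf("mount cleanup output:\n%s", out)
}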

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-944676 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-944676
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-944676
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-944676 image ls --format short --alsologtostderr:
I0612 20:27:32.170621   32291 out.go:291] Setting OutFile to fd 1 ...
I0612 20:27:32.170858   32291 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 20:27:32.170868   32291 out.go:304] Setting ErrFile to fd 2...
I0612 20:27:32.170874   32291 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 20:27:32.171044   32291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
I0612 20:27:32.171580   32291 config.go:182] Loaded profile config "functional-944676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0612 20:27:32.171692   32291 config.go:182] Loaded profile config "functional-944676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0612 20:27:32.172069   32291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0612 20:27:32.172132   32291 main.go:141] libmachine: Launching plugin server for driver kvm2
I0612 20:27:32.186687   32291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42905
I0612 20:27:32.187250   32291 main.go:141] libmachine: () Calling .GetVersion
I0612 20:27:32.187796   32291 main.go:141] libmachine: Using API Version  1
I0612 20:27:32.187824   32291 main.go:141] libmachine: () Calling .SetConfigRaw
I0612 20:27:32.188142   32291 main.go:141] libmachine: () Calling .GetMachineName
I0612 20:27:32.188355   32291 main.go:141] libmachine: (functional-944676) Calling .GetState
I0612 20:27:32.190213   32291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0612 20:27:32.190258   32291 main.go:141] libmachine: Launching plugin server for driver kvm2
I0612 20:27:32.204038   32291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38153
I0612 20:27:32.204449   32291 main.go:141] libmachine: () Calling .GetVersion
I0612 20:27:32.205000   32291 main.go:141] libmachine: Using API Version  1
I0612 20:27:32.205033   32291 main.go:141] libmachine: () Calling .SetConfigRaw
I0612 20:27:32.205325   32291 main.go:141] libmachine: () Calling .GetMachineName
I0612 20:27:32.205530   32291 main.go:141] libmachine: (functional-944676) Calling .DriverName
I0612 20:27:32.205740   32291 ssh_runner.go:195] Run: systemctl --version
I0612 20:27:32.205772   32291 main.go:141] libmachine: (functional-944676) Calling .GetSSHHostname
I0612 20:27:32.208831   32291 main.go:141] libmachine: (functional-944676) DBG | domain functional-944676 has defined MAC address 52:54:00:89:75:3d in network mk-functional-944676
I0612 20:27:32.209258   32291 main.go:141] libmachine: (functional-944676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:75:3d", ip: ""} in network mk-functional-944676: {Iface:virbr1 ExpiryTime:2024-06-12 21:24:39 +0000 UTC Type:0 Mac:52:54:00:89:75:3d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:functional-944676 Clientid:01:52:54:00:89:75:3d}
I0612 20:27:32.209295   32291 main.go:141] libmachine: (functional-944676) DBG | domain functional-944676 has defined IP address 192.168.39.53 and MAC address 52:54:00:89:75:3d in network mk-functional-944676
I0612 20:27:32.209418   32291 main.go:141] libmachine: (functional-944676) Calling .GetSSHPort
I0612 20:27:32.209576   32291 main.go:141] libmachine: (functional-944676) Calling .GetSSHKeyPath
I0612 20:27:32.209747   32291 main.go:141] libmachine: (functional-944676) Calling .GetSSHUsername
I0612 20:27:32.209996   32291 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/functional-944676/id_rsa Username:docker}
I0612 20:27:32.304247   32291 ssh_runner.go:195] Run: sudo crictl images --output json
I0612 20:27:32.353729   32291 main.go:141] libmachine: Making call to close driver server
I0612 20:27:32.353744   32291 main.go:141] libmachine: (functional-944676) Calling .Close
I0612 20:27:32.353999   32291 main.go:141] libmachine: (functional-944676) DBG | Closing plugin on server side
I0612 20:27:32.354041   32291 main.go:141] libmachine: Successfully made call to close driver server
I0612 20:27:32.354063   32291 main.go:141] libmachine: Making call to close connection to plugin binary
I0612 20:27:32.354078   32291 main.go:141] libmachine: Making call to close driver server
I0612 20:27:32.354090   32291 main.go:141] libmachine: (functional-944676) Calling .Close
I0612 20:27:32.354290   32291 main.go:141] libmachine: Successfully made call to close driver server
I0612 20:27:32.354306   32291 main.go:141] libmachine: Making call to close connection to plugin binary
I0612 20:27:32.354360   32291 main.go:141] libmachine: (functional-944676) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
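
The ImageCommands/ImageList* tests render the same image inventory in short, table, json, and yaml form via `minikube image ls --format <fmt>`. The json form (shown under ImageListJson below) is the easiest to consume programmatically; a sketch that decodes the fields visible in that output (id, repoTags, repoDigests, size) and looks up one image, assuming the binary and profile from this run:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// image mirrors the fields visible in the `image ls --format json` output below.
type image struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-944676",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			if strings.HasPrefix(tag, "registry.k8s.io/kube-apiserver:") {
				fmt.Printf("%s -> %s (%s bytes)\n", tag, img.ID[:13], img.Size)
			}
		}
	}
}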

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-944676 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/google-containers/addon-resizer  | functional-944676  | ffd4cfbbe753e | 34.1MB |
| localhost/minikube-local-cache-test     | functional-944676  | 616d92e92de25 | 3.33kB |
| registry.k8s.io/kube-controller-manager | v1.30.1            | 25a1387cdab82 | 112MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.30.1            | 91be940803172 | 118MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/nginx                 | latest             | 4f67c83422ec7 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.30.1            | a52dc94f0a912 | 63MB   |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-proxy              | v1.30.1            | 747097150317f | 85.9MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-944676 image ls --format table --alsologtostderr:
I0612 20:27:32.651503   32420 out.go:291] Setting OutFile to fd 1 ...
I0612 20:27:32.651767   32420 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 20:27:32.651777   32420 out.go:304] Setting ErrFile to fd 2...
I0612 20:27:32.651783   32420 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 20:27:32.652071   32420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
I0612 20:27:32.652846   32420 config.go:182] Loaded profile config "functional-944676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0612 20:27:32.652991   32420 config.go:182] Loaded profile config "functional-944676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0612 20:27:32.653548   32420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0612 20:27:32.653605   32420 main.go:141] libmachine: Launching plugin server for driver kvm2
I0612 20:27:32.668908   32420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42547
I0612 20:27:32.669281   32420 main.go:141] libmachine: () Calling .GetVersion
I0612 20:27:32.669867   32420 main.go:141] libmachine: Using API Version  1
I0612 20:27:32.669895   32420 main.go:141] libmachine: () Calling .SetConfigRaw
I0612 20:27:32.670256   32420 main.go:141] libmachine: () Calling .GetMachineName
I0612 20:27:32.670490   32420 main.go:141] libmachine: (functional-944676) Calling .GetState
I0612 20:27:32.672243   32420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0612 20:27:32.672277   32420 main.go:141] libmachine: Launching plugin server for driver kvm2
I0612 20:27:32.686188   32420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38735
I0612 20:27:32.686593   32420 main.go:141] libmachine: () Calling .GetVersion
I0612 20:27:32.687046   32420 main.go:141] libmachine: Using API Version  1
I0612 20:27:32.687070   32420 main.go:141] libmachine: () Calling .SetConfigRaw
I0612 20:27:32.687379   32420 main.go:141] libmachine: () Calling .GetMachineName
I0612 20:27:32.687555   32420 main.go:141] libmachine: (functional-944676) Calling .DriverName
I0612 20:27:32.687759   32420 ssh_runner.go:195] Run: systemctl --version
I0612 20:27:32.687781   32420 main.go:141] libmachine: (functional-944676) Calling .GetSSHHostname
I0612 20:27:32.690048   32420 main.go:141] libmachine: (functional-944676) DBG | domain functional-944676 has defined MAC address 52:54:00:89:75:3d in network mk-functional-944676
I0612 20:27:32.690392   32420 main.go:141] libmachine: (functional-944676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:75:3d", ip: ""} in network mk-functional-944676: {Iface:virbr1 ExpiryTime:2024-06-12 21:24:39 +0000 UTC Type:0 Mac:52:54:00:89:75:3d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:functional-944676 Clientid:01:52:54:00:89:75:3d}
I0612 20:27:32.690417   32420 main.go:141] libmachine: (functional-944676) DBG | domain functional-944676 has defined IP address 192.168.39.53 and MAC address 52:54:00:89:75:3d in network mk-functional-944676
I0612 20:27:32.690592   32420 main.go:141] libmachine: (functional-944676) Calling .GetSSHPort
I0612 20:27:32.690763   32420 main.go:141] libmachine: (functional-944676) Calling .GetSSHKeyPath
I0612 20:27:32.690885   32420 main.go:141] libmachine: (functional-944676) Calling .GetSSHUsername
I0612 20:27:32.691009   32420 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/functional-944676/id_rsa Username:docker}
I0612 20:27:32.778015   32420 ssh_runner.go:195] Run: sudo crictl images --output json
I0612 20:27:32.835077   32420 main.go:141] libmachine: Making call to close driver server
I0612 20:27:32.835095   32420 main.go:141] libmachine: (functional-944676) Calling .Close
I0612 20:27:32.835363   32420 main.go:141] libmachine: Successfully made call to close driver server
I0612 20:27:32.835380   32420 main.go:141] libmachine: Making call to close connection to plugin binary
I0612 20:27:32.835388   32420 main.go:141] libmachine: Making call to close driver server
I0612 20:27:32.835396   32420 main.go:141] libmachine: (functional-944676) Calling .Close
I0612 20:27:32.835653   32420 main.go:141] libmachine: (functional-944676) DBG | Closing plugin on server side
I0612 20:27:32.835689   32420 main.go:141] libmachine: Successfully made call to close driver server
I0612 20:27:32.835723   32420 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-944676 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100","repoDigests":["docker.io/library/nginx@sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d","docker.io/library/nginx@sha256:1445eb9c6dc5e9619346c836ef6fbd6a95092e4663f27dcfce116f051cdbd232"],"repoTags":["docker.io/library/nginx:latest"],"size":"191814165"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-944676"],"size":"34114467"},{"id":"616d92e92de25549
e728714775213896cf77f6bcea4ddbee25c5b050d4187f1a","repoDigests":["localhost/minikube-local-cache-test@sha256:14dd989f555b38832b5b019c463ebf9ce3fa869a5d8b3876f3749062c490f5be"],"repoTags":["localhost/minikube-local-cache-test:functional-944676"],"size":"3330"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"25a1387cdab82166df8
29c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52","registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"112170310"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":["registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"85933465"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cd
f8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s
.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags
":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea","registry.k8s.io/kube-apiserver@sha256:a9cf4f
4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117601759"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":["registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036","registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"63026504"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-944676 image ls --format json --alsologtostderr:
I0612 20:27:32.424738   32346 out.go:291] Setting OutFile to fd 1 ...
I0612 20:27:32.424838   32346 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 20:27:32.424850   32346 out.go:304] Setting ErrFile to fd 2...
I0612 20:27:32.424856   32346 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 20:27:32.425100   32346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
I0612 20:27:32.425785   32346 config.go:182] Loaded profile config "functional-944676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0612 20:27:32.425926   32346 config.go:182] Loaded profile config "functional-944676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0612 20:27:32.426358   32346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0612 20:27:32.426402   32346 main.go:141] libmachine: Launching plugin server for driver kvm2
I0612 20:27:32.440895   32346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
I0612 20:27:32.441364   32346 main.go:141] libmachine: () Calling .GetVersion
I0612 20:27:32.441950   32346 main.go:141] libmachine: Using API Version  1
I0612 20:27:32.441985   32346 main.go:141] libmachine: () Calling .SetConfigRaw
I0612 20:27:32.442329   32346 main.go:141] libmachine: () Calling .GetMachineName
I0612 20:27:32.442556   32346 main.go:141] libmachine: (functional-944676) Calling .GetState
I0612 20:27:32.444624   32346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0612 20:27:32.444657   32346 main.go:141] libmachine: Launching plugin server for driver kvm2
I0612 20:27:32.458015   32346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37657
I0612 20:27:32.458329   32346 main.go:141] libmachine: () Calling .GetVersion
I0612 20:27:32.458804   32346 main.go:141] libmachine: Using API Version  1
I0612 20:27:32.458834   32346 main.go:141] libmachine: () Calling .SetConfigRaw
I0612 20:27:32.459107   32346 main.go:141] libmachine: () Calling .GetMachineName
I0612 20:27:32.459281   32346 main.go:141] libmachine: (functional-944676) Calling .DriverName
I0612 20:27:32.459439   32346 ssh_runner.go:195] Run: systemctl --version
I0612 20:27:32.459456   32346 main.go:141] libmachine: (functional-944676) Calling .GetSSHHostname
I0612 20:27:32.462488   32346 main.go:141] libmachine: (functional-944676) DBG | domain functional-944676 has defined MAC address 52:54:00:89:75:3d in network mk-functional-944676
I0612 20:27:32.463066   32346 main.go:141] libmachine: (functional-944676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:75:3d", ip: ""} in network mk-functional-944676: {Iface:virbr1 ExpiryTime:2024-06-12 21:24:39 +0000 UTC Type:0 Mac:52:54:00:89:75:3d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:functional-944676 Clientid:01:52:54:00:89:75:3d}
I0612 20:27:32.463103   32346 main.go:141] libmachine: (functional-944676) DBG | domain functional-944676 has defined IP address 192.168.39.53 and MAC address 52:54:00:89:75:3d in network mk-functional-944676
I0612 20:27:32.463108   32346 main.go:141] libmachine: (functional-944676) Calling .GetSSHPort
I0612 20:27:32.463254   32346 main.go:141] libmachine: (functional-944676) Calling .GetSSHKeyPath
I0612 20:27:32.463375   32346 main.go:141] libmachine: (functional-944676) Calling .GetSSHUsername
I0612 20:27:32.463471   32346 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/functional-944676/id_rsa Username:docker}
I0612 20:27:32.552233   32346 ssh_runner.go:195] Run: sudo crictl images --output json
I0612 20:27:32.601348   32346 main.go:141] libmachine: Making call to close driver server
I0612 20:27:32.601368   32346 main.go:141] libmachine: (functional-944676) Calling .Close
I0612 20:27:32.601645   32346 main.go:141] libmachine: Successfully made call to close driver server
I0612 20:27:32.601660   32346 main.go:141] libmachine: (functional-944676) DBG | Closing plugin on server side
I0612 20:27:32.601663   32346 main.go:141] libmachine: Making call to close connection to plugin binary
I0612 20:27:32.601696   32346 main.go:141] libmachine: Making call to close driver server
I0612 20:27:32.601702   32346 main.go:141] libmachine: (functional-944676) Calling .Close
I0612 20:27:32.601920   32346 main.go:141] libmachine: Successfully made call to close driver server
I0612 20:27:32.601934   32346 main.go:141] libmachine: Making call to close connection to plugin binary
I0612 20:27:32.601945   32346 main.go:141] libmachine: (functional-944676) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-944676 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-944676
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 616d92e92de25549e728714775213896cf77f6bcea4ddbee25c5b050d4187f1a
repoDigests:
- localhost/minikube-local-cache-test@sha256:14dd989f555b38832b5b019c463ebf9ce3fa869a5d8b3876f3749062c490f5be
repoTags:
- localhost/minikube-local-cache-test:functional-944676
size: "3330"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52
- registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "112170310"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036
- registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "63026504"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea
- registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117601759"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100
repoDigests:
- docker.io/library/nginx@sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d
- docker.io/library/nginx@sha256:1445eb9c6dc5e9619346c836ef6fbd6a95092e4663f27dcfce116f051cdbd232
repoTags:
- docker.io/library/nginx:latest
size: "191814165"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa
- registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "85933465"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-944676 image ls --format yaml --alsologtostderr:
I0612 20:27:32.168322   32292 out.go:291] Setting OutFile to fd 1 ...
I0612 20:27:32.168586   32292 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 20:27:32.168596   32292 out.go:304] Setting ErrFile to fd 2...
I0612 20:27:32.168601   32292 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 20:27:32.168764   32292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
I0612 20:27:32.169400   32292 config.go:182] Loaded profile config "functional-944676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0612 20:27:32.169544   32292 config.go:182] Loaded profile config "functional-944676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0612 20:27:32.170032   32292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0612 20:27:32.170087   32292 main.go:141] libmachine: Launching plugin server for driver kvm2
I0612 20:27:32.184467   32292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45931
I0612 20:27:32.184907   32292 main.go:141] libmachine: () Calling .GetVersion
I0612 20:27:32.185772   32292 main.go:141] libmachine: Using API Version  1
I0612 20:27:32.185857   32292 main.go:141] libmachine: () Calling .SetConfigRaw
I0612 20:27:32.187044   32292 main.go:141] libmachine: () Calling .GetMachineName
I0612 20:27:32.187312   32292 main.go:141] libmachine: (functional-944676) Calling .GetState
I0612 20:27:32.189556   32292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0612 20:27:32.189596   32292 main.go:141] libmachine: Launching plugin server for driver kvm2
I0612 20:27:32.204362   32292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36345
I0612 20:27:32.204791   32292 main.go:141] libmachine: () Calling .GetVersion
I0612 20:27:32.205243   32292 main.go:141] libmachine: Using API Version  1
I0612 20:27:32.205264   32292 main.go:141] libmachine: () Calling .SetConfigRaw
I0612 20:27:32.205584   32292 main.go:141] libmachine: () Calling .GetMachineName
I0612 20:27:32.205741   32292 main.go:141] libmachine: (functional-944676) Calling .DriverName
I0612 20:27:32.205946   32292 ssh_runner.go:195] Run: systemctl --version
I0612 20:27:32.205964   32292 main.go:141] libmachine: (functional-944676) Calling .GetSSHHostname
I0612 20:27:32.209151   32292 main.go:141] libmachine: (functional-944676) DBG | domain functional-944676 has defined MAC address 52:54:00:89:75:3d in network mk-functional-944676
I0612 20:27:32.209572   32292 main.go:141] libmachine: (functional-944676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:75:3d", ip: ""} in network mk-functional-944676: {Iface:virbr1 ExpiryTime:2024-06-12 21:24:39 +0000 UTC Type:0 Mac:52:54:00:89:75:3d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:functional-944676 Clientid:01:52:54:00:89:75:3d}
I0612 20:27:32.209598   32292 main.go:141] libmachine: (functional-944676) DBG | domain functional-944676 has defined IP address 192.168.39.53 and MAC address 52:54:00:89:75:3d in network mk-functional-944676
I0612 20:27:32.209756   32292 main.go:141] libmachine: (functional-944676) Calling .GetSSHPort
I0612 20:27:32.209914   32292 main.go:141] libmachine: (functional-944676) Calling .GetSSHKeyPath
I0612 20:27:32.210055   32292 main.go:141] libmachine: (functional-944676) Calling .GetSSHUsername
I0612 20:27:32.210218   32292 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/functional-944676/id_rsa Username:docker}
I0612 20:27:32.307147   32292 ssh_runner.go:195] Run: sudo crictl images --output json
I0612 20:27:32.368206   32292 main.go:141] libmachine: Making call to close driver server
I0612 20:27:32.368227   32292 main.go:141] libmachine: (functional-944676) Calling .Close
I0612 20:27:32.368491   32292 main.go:141] libmachine: Successfully made call to close driver server
I0612 20:27:32.368509   32292 main.go:141] libmachine: Making call to close connection to plugin binary
I0612 20:27:32.368512   32292 main.go:141] libmachine: (functional-944676) DBG | Closing plugin on server side
I0612 20:27:32.368519   32292 main.go:141] libmachine: Making call to close driver server
I0612 20:27:32.368530   32292 main.go:141] libmachine: (functional-944676) Calling .Close
I0612 20:27:32.368745   32292 main.go:141] libmachine: Successfully made call to close driver server
I0612 20:27:32.368762   32292 main.go:141] libmachine: Making call to close connection to plugin binary
I0612 20:27:32.368766   32292 main.go:141] libmachine: (functional-944676) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944676 ssh pgrep buildkitd: exit status 1 (203.938698ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image build -t localhost/my-image:functional-944676 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-944676 image build -t localhost/my-image:functional-944676 testdata/build --alsologtostderr: (3.362641037s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-944676 image build -t localhost/my-image:functional-944676 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 38402f8b6d7
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-944676
--> b798f270bac
Successfully tagged localhost/my-image:functional-944676
b798f270bac1d8e963f3a8fdf99d6c31a55a9a5a4a40b5a510bfefdeab826ae8
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-944676 image build -t localhost/my-image:functional-944676 testdata/build --alsologtostderr:
I0612 20:27:32.601662   32407 out.go:291] Setting OutFile to fd 1 ...
I0612 20:27:32.602001   32407 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 20:27:32.602010   32407 out.go:304] Setting ErrFile to fd 2...
I0612 20:27:32.602016   32407 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 20:27:32.602263   32407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
I0612 20:27:32.602966   32407 config.go:182] Loaded profile config "functional-944676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0612 20:27:32.603546   32407 config.go:182] Loaded profile config "functional-944676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0612 20:27:32.603885   32407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0612 20:27:32.603935   32407 main.go:141] libmachine: Launching plugin server for driver kvm2
I0612 20:27:32.618406   32407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
I0612 20:27:32.618829   32407 main.go:141] libmachine: () Calling .GetVersion
I0612 20:27:32.619345   32407 main.go:141] libmachine: Using API Version  1
I0612 20:27:32.619368   32407 main.go:141] libmachine: () Calling .SetConfigRaw
I0612 20:27:32.619671   32407 main.go:141] libmachine: () Calling .GetMachineName
I0612 20:27:32.619865   32407 main.go:141] libmachine: (functional-944676) Calling .GetState
I0612 20:27:32.621774   32407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0612 20:27:32.621813   32407 main.go:141] libmachine: Launching plugin server for driver kvm2
I0612 20:27:32.637952   32407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44239
I0612 20:27:32.638363   32407 main.go:141] libmachine: () Calling .GetVersion
I0612 20:27:32.638919   32407 main.go:141] libmachine: Using API Version  1
I0612 20:27:32.638947   32407 main.go:141] libmachine: () Calling .SetConfigRaw
I0612 20:27:32.639272   32407 main.go:141] libmachine: () Calling .GetMachineName
I0612 20:27:32.639472   32407 main.go:141] libmachine: (functional-944676) Calling .DriverName
I0612 20:27:32.639686   32407 ssh_runner.go:195] Run: systemctl --version
I0612 20:27:32.639726   32407 main.go:141] libmachine: (functional-944676) Calling .GetSSHHostname
I0612 20:27:32.642513   32407 main.go:141] libmachine: (functional-944676) DBG | domain functional-944676 has defined MAC address 52:54:00:89:75:3d in network mk-functional-944676
I0612 20:27:32.642903   32407 main.go:141] libmachine: (functional-944676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:75:3d", ip: ""} in network mk-functional-944676: {Iface:virbr1 ExpiryTime:2024-06-12 21:24:39 +0000 UTC Type:0 Mac:52:54:00:89:75:3d Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:functional-944676 Clientid:01:52:54:00:89:75:3d}
I0612 20:27:32.642970   32407 main.go:141] libmachine: (functional-944676) DBG | domain functional-944676 has defined IP address 192.168.39.53 and MAC address 52:54:00:89:75:3d in network mk-functional-944676
I0612 20:27:32.643279   32407 main.go:141] libmachine: (functional-944676) Calling .GetSSHPort
I0612 20:27:32.643448   32407 main.go:141] libmachine: (functional-944676) Calling .GetSSHKeyPath
I0612 20:27:32.643685   32407 main.go:141] libmachine: (functional-944676) Calling .GetSSHUsername
I0612 20:27:32.643821   32407 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/functional-944676/id_rsa Username:docker}
I0612 20:27:32.728160   32407 build_images.go:161] Building image from path: /tmp/build.3984074594.tar
I0612 20:27:32.728214   32407 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0612 20:27:32.739459   32407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3984074594.tar
I0612 20:27:32.744284   32407 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3984074594.tar: stat -c "%s %y" /var/lib/minikube/build/build.3984074594.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3984074594.tar': No such file or directory
I0612 20:27:32.744315   32407 ssh_runner.go:362] scp /tmp/build.3984074594.tar --> /var/lib/minikube/build/build.3984074594.tar (3072 bytes)
I0612 20:27:32.787044   32407 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3984074594
I0612 20:27:32.804983   32407 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3984074594 -xf /var/lib/minikube/build/build.3984074594.tar
I0612 20:27:32.820122   32407 crio.go:315] Building image: /var/lib/minikube/build/build.3984074594
I0612 20:27:32.820199   32407 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-944676 /var/lib/minikube/build/build.3984074594 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0612 20:27:35.898733   32407 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-944676 /var/lib/minikube/build/build.3984074594 --cgroup-manager=cgroupfs: (3.078500863s)
I0612 20:27:35.898802   32407 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3984074594
I0612 20:27:35.910108   32407 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3984074594.tar
I0612 20:27:35.920410   32407 build_images.go:217] Built localhost/my-image:functional-944676 from /tmp/build.3984074594.tar
I0612 20:27:35.920442   32407 build_images.go:133] succeeded building to: functional-944676
I0612 20:27:35.920448   32407 build_images.go:134] failed building to: 
I0612 20:27:35.920490   32407 main.go:141] libmachine: Making call to close driver server
I0612 20:27:35.920505   32407 main.go:141] libmachine: (functional-944676) Calling .Close
I0612 20:27:35.920744   32407 main.go:141] libmachine: Successfully made call to close driver server
I0612 20:27:35.920757   32407 main.go:141] libmachine: Making call to close connection to plugin binary
I0612 20:27:35.920765   32407 main.go:141] libmachine: Making call to close driver server
I0612 20:27:35.920772   32407 main.go:141] libmachine: (functional-944676) Calling .Close
I0612 20:27:35.920789   32407 main.go:141] libmachine: (functional-944676) DBG | Closing plugin on server side
I0612 20:27:35.921086   32407 main.go:141] libmachine: Successfully made call to close driver server
I0612 20:27:35.921093   32407 main.go:141] libmachine: (functional-944676) DBG | Closing plugin on server side
I0612 20:27:35.921104   32407 main.go:141] libmachine: Making call to close connection to plugin binary
W0612 20:27:35.923670   32407 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 976dc540-b8a4-42f5-bf93-3105d14b6af5
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.80s)
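The STEP 1/3 through 3/3 lines above are the whole build recipe, so the testdata/build context can be reconstructed and re-run by hand. A minimal sketch of doing that, assuming a throwaway directory name (./build-ctx) and a placeholder content.txt, neither of which appears in the log:

# Recreate a build context matching the logged steps (FROM busybox, RUN true, ADD content.txt).
mkdir -p ./build-ctx
echo placeholder > ./build-ctx/content.txt
cat > ./build-ctx/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# Build inside the cluster's runtime (with crio this delegates to podman, as the stderr above shows):
out/minikube-linux-amd64 -p functional-944676 image build -t localhost/my-image:functional-944676 ./build-ctx --alsologtostderr
# Confirm the new tag is visible to the runtime:
out/minikube-linux-amd64 -p functional-944676 image ls | grep my-image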

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.977667221s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-944676
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image load --daemon gcr.io/google-containers/addon-resizer:functional-944676 --alsologtostderr
2024/06/12 20:27:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-944676 image load --daemon gcr.io/google-containers/addon-resizer:functional-944676 --alsologtostderr: (4.888650405s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.15s)
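Here the image comes straight out of the host's Docker daemon: the --daemon flag makes minikube read the locally tagged image (the tag created in Setup above) instead of a registry, then push it into the cluster's crio storage. A condensed sketch of that flow, assuming the Setup tag already exists on the host:

# Tag on the host (done in Setup), then load the daemon-resident image into the cluster and verify:
docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-944676
out/minikube-linux-amd64 -p functional-944676 image load --daemon gcr.io/google-containers/addon-resizer:functional-944676 --alsologtostderr
out/minikube-linux-amd64 -p functional-944676 image ls | grep addon-resizer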

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image load --daemon gcr.io/google-containers/addon-resizer:functional-944676 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-944676 image load --daemon gcr.io/google-containers/addon-resizer:functional-944676 --alsologtostderr: (2.686948465s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.039769675s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-944676
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image load --daemon gcr.io/google-containers/addon-resizer:functional-944676 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-944676 image load --daemon gcr.io/google-containers/addon-resizer:functional-944676 --alsologtostderr: (4.376924751s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.04s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image save gcr.io/google-containers/addon-resizer:functional-944676 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-944676 image save gcr.io/google-containers/addon-resizer:functional-944676 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.524630468s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-944676 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (6.444275296s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.74s)
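Paired with ImageSaveToFile above, this exercises the tarball round trip: image save exports the tagged image from the cluster's runtime to a tar archive on the host, and image load re-imports it. A condensed sketch of the round trip, assuming the profile and tag from this run; the relative tar path here is illustrative (the log uses an absolute workspace path):

# Export from the cluster runtime to a tar on the host, load it back, then verify:
out/minikube-linux-amd64 -p functional-944676 image save gcr.io/google-containers/addon-resizer:functional-944676 ./addon-resizer-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-944676 image load ./addon-resizer-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-944676 image ls | grep addon-resizer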

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-944676
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-944676 image save --daemon gcr.io/google-containers/addon-resizer:functional-944676 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-944676 image save --daemon gcr.io/google-containers/addon-resizer:functional-944676 --alsologtostderr: (1.761027898s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-944676
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.80s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-944676
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-944676
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-944676
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (256.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-844626 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0612 20:29:56.707417   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:30:24.389808   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 20:31:48.613241   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:31:48.618591   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:31:48.628857   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:31:48.649127   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:31:48.689495   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:31:48.769820   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:31:48.930133   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:31:49.250800   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:31:49.891216   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:31:51.171564   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:31:53.732551   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-844626 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m16.066591868s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (256.73s)
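The --ha flag in the start command above provisions a multi-control-plane cluster (in this run, ha-844626 plus the -m02 and -m03 machines, with -m04 added later as a worker), and the follow-up status call is how the test confirms every machine came up. A condensed sketch of the same start-and-verify sequence with the verbosity flags omitted; the control-plane label query is a standard kubeadm role label rather than something shown in the log:

out/minikube-linux-amd64 start -p ha-844626 --wait=true --memory=2200 --ha --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 -p ha-844626 status
# Count the control-plane nodes (assumed label: node-role.kubernetes.io/control-plane):
kubectl --context ha-844626 get nodes -l node-role.kubernetes.io/control-plane --no-headers | wc -l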

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- rollout status deployment/busybox
E0612 20:31:58.852970   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-844626 -- rollout status deployment/busybox: (4.17467458s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- exec busybox-fc5497c4f-bdzsx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- exec busybox-fc5497c4f-bh59q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- exec busybox-fc5497c4f-dhw8h -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- exec busybox-fc5497c4f-bdzsx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- exec busybox-fc5497c4f-bh59q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- exec busybox-fc5497c4f-dhw8h -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- exec busybox-fc5497c4f-bdzsx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- exec busybox-fc5497c4f-bh59q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- exec busybox-fc5497c4f-dhw8h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.33s)
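The DeployApp step applies testdata/ha/ha-pod-dns-test.yaml, waits for the busybox rollout, and then resolves kubernetes.io, kubernetes.default and the fully qualified service name from every replica, a quick check that in-cluster DNS works from each pod. A compact sketch of the same loop, assuming the pod names keep the busybox- prefix seen above; filtering by name prefix (rather than a label selector, which the log does not show) is illustrative:

kubectl --context ha-844626 rollout status deployment/busybox
for pod in $(kubectl --context ha-844626 get pods -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep '^busybox-'); do
  kubectl --context ha-844626 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done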

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- exec busybox-fc5497c4f-bdzsx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- exec busybox-fc5497c4f-bdzsx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- exec busybox-fc5497c4f-bh59q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- exec busybox-fc5497c4f-bh59q -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- exec busybox-fc5497c4f-dhw8h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-844626 -- exec busybox-fc5497c4f-dhw8h -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)
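The pipeline in this test is worth decoding: it runs nslookup host.minikube.internal inside each busybox pod, takes the fifth line of busybox's nslookup output with awk 'NR==5' and the third space-separated field with cut, which yields the host's address as seen from the guest network (192.168.39.1 here, the .1 address of the libvirt network the nodes sit on), and then pings that address once. The same check by hand against one of the pods above:

kubectl --context ha-844626 exec busybox-fc5497c4f-bdzsx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
kubectl --context ha-844626 exec busybox-fc5497c4f-bdzsx -- sh -c "ping -c 1 192.168.39.1"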

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (47.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-844626 -v=7 --alsologtostderr
E0612 20:32:09.093267   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:32:29.574079   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-844626 -v=7 --alsologtostderr: (46.566018025s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-844626 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp testdata/cp-test.txt ha-844626:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile43944605/001/cp-test_ha-844626.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626:/home/docker/cp-test.txt ha-844626-m02:/home/docker/cp-test_ha-844626_ha-844626-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m02 "sudo cat /home/docker/cp-test_ha-844626_ha-844626-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626:/home/docker/cp-test.txt ha-844626-m03:/home/docker/cp-test_ha-844626_ha-844626-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m03 "sudo cat /home/docker/cp-test_ha-844626_ha-844626-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626:/home/docker/cp-test.txt ha-844626-m04:/home/docker/cp-test_ha-844626_ha-844626-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m04 "sudo cat /home/docker/cp-test_ha-844626_ha-844626-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp testdata/cp-test.txt ha-844626-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile43944605/001/cp-test_ha-844626-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626-m02:/home/docker/cp-test.txt ha-844626:/home/docker/cp-test_ha-844626-m02_ha-844626.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626 "sudo cat /home/docker/cp-test_ha-844626-m02_ha-844626.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626-m02:/home/docker/cp-test.txt ha-844626-m03:/home/docker/cp-test_ha-844626-m02_ha-844626-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m03 "sudo cat /home/docker/cp-test_ha-844626-m02_ha-844626-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626-m02:/home/docker/cp-test.txt ha-844626-m04:/home/docker/cp-test_ha-844626-m02_ha-844626-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m04 "sudo cat /home/docker/cp-test_ha-844626-m02_ha-844626-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp testdata/cp-test.txt ha-844626-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile43944605/001/cp-test_ha-844626-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626-m03:/home/docker/cp-test.txt ha-844626:/home/docker/cp-test_ha-844626-m03_ha-844626.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626 "sudo cat /home/docker/cp-test_ha-844626-m03_ha-844626.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626-m03:/home/docker/cp-test.txt ha-844626-m02:/home/docker/cp-test_ha-844626-m03_ha-844626-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m02 "sudo cat /home/docker/cp-test_ha-844626-m03_ha-844626-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626-m03:/home/docker/cp-test.txt ha-844626-m04:/home/docker/cp-test_ha-844626-m03_ha-844626-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m04 "sudo cat /home/docker/cp-test_ha-844626-m03_ha-844626-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp testdata/cp-test.txt ha-844626-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile43944605/001/cp-test_ha-844626-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt ha-844626:/home/docker/cp-test_ha-844626-m04_ha-844626.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626 "sudo cat /home/docker/cp-test_ha-844626-m04_ha-844626.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt ha-844626-m02:/home/docker/cp-test_ha-844626-m04_ha-844626-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m02 "sudo cat /home/docker/cp-test_ha-844626-m04_ha-844626-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 cp ha-844626-m04:/home/docker/cp-test.txt ha-844626-m03:/home/docker/cp-test_ha-844626-m04_ha-844626-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m03 "sudo cat /home/docker/cp-test_ha-844626-m04_ha-844626-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.70s)
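Every step in the copy matrix above is the same two-command pattern: minikube cp pushes testdata/cp-test.txt to a path on one node (or from one node to another), and minikube ssh -n <node> cats it back to prove the transfer landed. A generic sketch of the pattern, assuming the profile from this run; the destination paths match the ones used throughout the test:

# Host -> node, then verify over SSH on that node:
out/minikube-linux-amd64 -p ha-844626 cp testdata/cp-test.txt ha-844626-m02:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m02 "sudo cat /home/docker/cp-test.txt"
# Node -> node copies use the same syntax with a node-qualified source:
out/minikube-linux-amd64 -p ha-844626 cp ha-844626-m02:/home/docker/cp-test.txt ha-844626-m03:/home/docker/cp-test_ha-844626-m02.txt
out/minikube-linux-amd64 -p ha-844626 ssh -n ha-844626-m03 "sudo cat /home/docker/cp-test_ha-844626-m02.txt"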

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.465467447s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-844626 node delete m03 -v=7 --alsologtostderr: (17.53853363s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (342.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-844626 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0612 20:46:48.613342   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:48:11.657945   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 20:49:56.704409   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-844626 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m42.06063365s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (342.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (75.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-844626 --control-plane -v=7 --alsologtostderr
E0612 20:51:48.612930   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-844626 --control-plane -v=7 --alsologtostderr: (1m14.353465666s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-844626 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.17s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                    
x
+
TestJSONOutput/start/Command (97.06s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-123062 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-123062 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m37.064036615s)
--- PASS: TestJSONOutput/start/Command (97.06s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-123062 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-123062 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.38s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-123062 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-123062 --output=json --user=testUser: (7.37772612s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-194657 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-194657 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.003143ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3edbb72e-281d-452b-befa-9663a08c6fa5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-194657] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c988a41-f4d5-4cfb-a9cd-353bee6ebe43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17779"}}
	{"specversion":"1.0","id":"88ce69e8-ecd5-4d1a-a11d-d4ef9ba32a95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"85dc3371-2126-4024-ae87-832d50a2c042","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig"}}
	{"specversion":"1.0","id":"04c5edfd-fca9-4e33-b217-9644da104382","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube"}}
	{"specversion":"1.0","id":"e35b8bc1-fcca-4bad-8d15-8500249d3295","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e7cdd4c2-e377-4541-afc3-e1efdd29080f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7d901f7d-412c-4107-a165-27f33bce9b2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-194657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-194657
--- PASS: TestErrorJSONOutput (0.18s)
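The stdout above is a stream of CloudEvents-style JSON objects, one per line; the failure itself is the single event with type io.k8s.sigs.minikube.error (name DRV_UNSUPPORTED_OS, exit code 56). A small sketch of filtering that stream on the host, assuming jq is installed, which the test itself does not use:

# Keep only error events from the JSON output stream and show their payload:
out/minikube-linux-amd64 start -p json-output-error-194657 --memory=2200 --output=json --wait=true --driver=fail | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'
# For this run the only match is the DRV_UNSUPPORTED_OS event shown in the stdout above.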

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (88.3s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-505218 --driver=kvm2  --container-runtime=crio
E0612 20:54:56.705244   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-505218 --driver=kvm2  --container-runtime=crio: (43.091769249s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-507596 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-507596 --driver=kvm2  --container-runtime=crio: (42.860238153s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-505218
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-507596
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-507596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-507596
helpers_test.go:175: Cleaning up "first-505218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-505218
--- PASS: TestMinikubeProfile (88.30s)
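
The two `profile list -ojson` invocations above return the profile data in JSON form. A hedged sketch of driving that command from Go without assuming its exact schema (decoding into a generic map is my illustrative choice, not something the test does):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Run the same command the test uses; the binary path matches this report.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}
	// Decode into a generic map so no particular output schema is assumed.
	var result map[string]json.RawMessage
	if err := json.Unmarshal(out, &result); err != nil {
		log.Fatalf("unexpected output: %v", err)
	}
	for key, raw := range result {
		fmt.Printf("%s: %d bytes of profile data\n", key, len(raw))
	}
}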

                                                
                                    
TestMountStart/serial/StartWithMountFirst (25.63s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-512625 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-512625 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.632135476s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.63s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-512625 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-512625 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.58s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-526625 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0612 20:56:48.613012   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-526625 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.581386122s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.58s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-526625 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-526625 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.66s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-512625 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-526625 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-526625 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-526625
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-526625: (1.27640145s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (20.48s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-526625
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-526625: (19.477589203s)
--- PASS: TestMountStart/serial/RestartStopped (20.48s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-526625 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-526625 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (99.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-991051 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0612 20:57:59.751479   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-991051 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m39.252635037s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.65s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-991051 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-991051 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-991051 -- rollout status deployment/busybox: (4.99265831s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-991051 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-991051 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-991051 -- exec busybox-fc5497c4f-846cm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-991051 -- exec busybox-fc5497c4f-mwfld -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-991051 -- exec busybox-fc5497c4f-846cm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-991051 -- exec busybox-fc5497c4f-mwfld -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-991051 -- exec busybox-fc5497c4f-846cm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-991051 -- exec busybox-fc5497c4f-mwfld -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.43s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-991051 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-991051 -- exec busybox-fc5497c4f-846cm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-991051 -- exec busybox-fc5497c4f-846cm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-991051 -- exec busybox-fc5497c4f-mwfld -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-991051 -- exec busybox-fc5497c4f-mwfld -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
TestMultiNode/serial/AddNode (41.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-991051 -v 3 --alsologtostderr
E0612 20:59:56.704920   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-991051 -v 3 --alsologtostderr: (40.542221884s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.10s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-991051 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 cp testdata/cp-test.txt multinode-991051:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 cp multinode-991051:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile839762677/001/cp-test_multinode-991051.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 cp multinode-991051:/home/docker/cp-test.txt multinode-991051-m02:/home/docker/cp-test_multinode-991051_multinode-991051-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051-m02 "sudo cat /home/docker/cp-test_multinode-991051_multinode-991051-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 cp multinode-991051:/home/docker/cp-test.txt multinode-991051-m03:/home/docker/cp-test_multinode-991051_multinode-991051-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051-m03 "sudo cat /home/docker/cp-test_multinode-991051_multinode-991051-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 cp testdata/cp-test.txt multinode-991051-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 cp multinode-991051-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile839762677/001/cp-test_multinode-991051-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 cp multinode-991051-m02:/home/docker/cp-test.txt multinode-991051:/home/docker/cp-test_multinode-991051-m02_multinode-991051.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051 "sudo cat /home/docker/cp-test_multinode-991051-m02_multinode-991051.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 cp multinode-991051-m02:/home/docker/cp-test.txt multinode-991051-m03:/home/docker/cp-test_multinode-991051-m02_multinode-991051-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051-m03 "sudo cat /home/docker/cp-test_multinode-991051-m02_multinode-991051-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 cp testdata/cp-test.txt multinode-991051-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 cp multinode-991051-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile839762677/001/cp-test_multinode-991051-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 cp multinode-991051-m03:/home/docker/cp-test.txt multinode-991051:/home/docker/cp-test_multinode-991051-m03_multinode-991051.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051 "sudo cat /home/docker/cp-test_multinode-991051-m03_multinode-991051.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 cp multinode-991051-m03:/home/docker/cp-test.txt multinode-991051-m02:/home/docker/cp-test_multinode-991051-m03_multinode-991051-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 ssh -n multinode-991051-m02 "sudo cat /home/docker/cp-test_multinode-991051-m03_multinode-991051-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.97s)
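
Each copy above is verified by reading the file back over SSH. A compact sketch of that round-trip for a single node, using only command forms that appear in this log (the profile and node names are placeholders taken from it):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

const minikube = "out/minikube-linux-amd64"

func main() {
	profile := "multinode-991051" // placeholder profile name from the log
	node := profile + "-m02"      // placeholder worker node name

	// Copy a local file onto the node, as the test does with testdata/cp-test.txt.
	if err := exec.Command(minikube, "-p", profile, "cp",
		"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").Run(); err != nil {
		log.Fatalf("cp failed: %v", err)
	}

	// Read it back over SSH to confirm the contents arrived intact.
	out, err := exec.Command(minikube, "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	fmt.Println(strings.TrimSpace(string(out)))
}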

                                                
                                    
TestMultiNode/serial/StopNode (2.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-991051 node stop m03: (1.550706314s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-991051 status: exit status 7 (406.837725ms)

                                                
                                                
-- stdout --
	multinode-991051
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-991051-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-991051-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-991051 status --alsologtostderr: exit status 7 (416.539533ms)

                                                
                                                
-- stdout --
	multinode-991051
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-991051-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-991051-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 21:00:09.228067   50086 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:00:09.228169   50086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:00:09.228177   50086 out.go:304] Setting ErrFile to fd 2...
	I0612 21:00:09.228181   50086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:00:09.228398   50086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:00:09.228581   50086 out.go:298] Setting JSON to false
	I0612 21:00:09.228605   50086 mustload.go:65] Loading cluster: multinode-991051
	I0612 21:00:09.228711   50086 notify.go:220] Checking for updates...
	I0612 21:00:09.229059   50086 config.go:182] Loaded profile config "multinode-991051": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:00:09.229078   50086 status.go:255] checking status of multinode-991051 ...
	I0612 21:00:09.229549   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:00:09.229588   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:00:09.248821   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I0612 21:00:09.249196   50086 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:00:09.249715   50086 main.go:141] libmachine: Using API Version  1
	I0612 21:00:09.249743   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:00:09.250126   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:00:09.250296   50086 main.go:141] libmachine: (multinode-991051) Calling .GetState
	I0612 21:00:09.251792   50086 status.go:330] multinode-991051 host status = "Running" (err=<nil>)
	I0612 21:00:09.251809   50086 host.go:66] Checking if "multinode-991051" exists ...
	I0612 21:00:09.252122   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:00:09.252155   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:00:09.267355   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40281
	I0612 21:00:09.267763   50086 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:00:09.268184   50086 main.go:141] libmachine: Using API Version  1
	I0612 21:00:09.268205   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:00:09.268472   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:00:09.268651   50086 main.go:141] libmachine: (multinode-991051) Calling .GetIP
	I0612 21:00:09.271387   50086 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:00:09.271864   50086 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:00:09.271907   50086 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:00:09.272007   50086 host.go:66] Checking if "multinode-991051" exists ...
	I0612 21:00:09.272380   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:00:09.272425   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:00:09.287501   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36533
	I0612 21:00:09.287893   50086 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:00:09.288347   50086 main.go:141] libmachine: Using API Version  1
	I0612 21:00:09.288367   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:00:09.288684   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:00:09.288871   50086 main.go:141] libmachine: (multinode-991051) Calling .DriverName
	I0612 21:00:09.289074   50086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 21:00:09.289106   50086 main.go:141] libmachine: (multinode-991051) Calling .GetSSHHostname
	I0612 21:00:09.291714   50086 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:00:09.292114   50086 main.go:141] libmachine: (multinode-991051) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:cc:62", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:57:46 +0000 UTC Type:0 Mac:52:54:00:24:cc:62 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-991051 Clientid:01:52:54:00:24:cc:62}
	I0612 21:00:09.292134   50086 main.go:141] libmachine: (multinode-991051) DBG | domain multinode-991051 has defined IP address 192.168.39.222 and MAC address 52:54:00:24:cc:62 in network mk-multinode-991051
	I0612 21:00:09.292303   50086 main.go:141] libmachine: (multinode-991051) Calling .GetSSHPort
	I0612 21:00:09.292459   50086 main.go:141] libmachine: (multinode-991051) Calling .GetSSHKeyPath
	I0612 21:00:09.292673   50086 main.go:141] libmachine: (multinode-991051) Calling .GetSSHUsername
	I0612 21:00:09.292858   50086 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/multinode-991051/id_rsa Username:docker}
	I0612 21:00:09.370695   50086 ssh_runner.go:195] Run: systemctl --version
	I0612 21:00:09.376860   50086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:00:09.391921   50086 kubeconfig.go:125] found "multinode-991051" server: "https://192.168.39.222:8443"
	I0612 21:00:09.391952   50086 api_server.go:166] Checking apiserver status ...
	I0612 21:00:09.391987   50086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 21:00:09.410940   50086 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1157/cgroup
	W0612 21:00:09.420540   50086 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1157/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 21:00:09.420601   50086 ssh_runner.go:195] Run: ls
	I0612 21:00:09.426926   50086 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I0612 21:00:09.431106   50086 api_server.go:279] https://192.168.39.222:8443/healthz returned 200:
	ok
	I0612 21:00:09.431127   50086 status.go:422] multinode-991051 apiserver status = Running (err=<nil>)
	I0612 21:00:09.431135   50086 status.go:257] multinode-991051 status: &{Name:multinode-991051 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 21:00:09.431150   50086 status.go:255] checking status of multinode-991051-m02 ...
	I0612 21:00:09.431452   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:00:09.431488   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:00:09.446321   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46329
	I0612 21:00:09.446675   50086 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:00:09.447114   50086 main.go:141] libmachine: Using API Version  1
	I0612 21:00:09.447148   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:00:09.447493   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:00:09.447682   50086 main.go:141] libmachine: (multinode-991051-m02) Calling .GetState
	I0612 21:00:09.449258   50086 status.go:330] multinode-991051-m02 host status = "Running" (err=<nil>)
	I0612 21:00:09.449270   50086 host.go:66] Checking if "multinode-991051-m02" exists ...
	I0612 21:00:09.449637   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:00:09.449679   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:00:09.464187   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0612 21:00:09.464572   50086 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:00:09.464977   50086 main.go:141] libmachine: Using API Version  1
	I0612 21:00:09.464998   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:00:09.465268   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:00:09.465426   50086 main.go:141] libmachine: (multinode-991051-m02) Calling .GetIP
	I0612 21:00:09.468177   50086 main.go:141] libmachine: (multinode-991051-m02) DBG | domain multinode-991051-m02 has defined MAC address 52:54:00:c6:8a:e8 in network mk-multinode-991051
	I0612 21:00:09.468530   50086 main.go:141] libmachine: (multinode-991051-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:8a:e8", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:58:45 +0000 UTC Type:0 Mac:52:54:00:c6:8a:e8 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:multinode-991051-m02 Clientid:01:52:54:00:c6:8a:e8}
	I0612 21:00:09.468556   50086 main.go:141] libmachine: (multinode-991051-m02) DBG | domain multinode-991051-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:c6:8a:e8 in network mk-multinode-991051
	I0612 21:00:09.468708   50086 host.go:66] Checking if "multinode-991051-m02" exists ...
	I0612 21:00:09.468989   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:00:09.469019   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:00:09.483427   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
	I0612 21:00:09.483813   50086 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:00:09.484252   50086 main.go:141] libmachine: Using API Version  1
	I0612 21:00:09.484268   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:00:09.484514   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:00:09.484686   50086 main.go:141] libmachine: (multinode-991051-m02) Calling .DriverName
	I0612 21:00:09.484915   50086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 21:00:09.484938   50086 main.go:141] libmachine: (multinode-991051-m02) Calling .GetSSHHostname
	I0612 21:00:09.487652   50086 main.go:141] libmachine: (multinode-991051-m02) DBG | domain multinode-991051-m02 has defined MAC address 52:54:00:c6:8a:e8 in network mk-multinode-991051
	I0612 21:00:09.488187   50086 main.go:141] libmachine: (multinode-991051-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:8a:e8", ip: ""} in network mk-multinode-991051: {Iface:virbr1 ExpiryTime:2024-06-12 21:58:45 +0000 UTC Type:0 Mac:52:54:00:c6:8a:e8 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:multinode-991051-m02 Clientid:01:52:54:00:c6:8a:e8}
	I0612 21:00:09.488218   50086 main.go:141] libmachine: (multinode-991051-m02) DBG | domain multinode-991051-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:c6:8a:e8 in network mk-multinode-991051
	I0612 21:00:09.488399   50086 main.go:141] libmachine: (multinode-991051-m02) Calling .GetSSHPort
	I0612 21:00:09.488553   50086 main.go:141] libmachine: (multinode-991051-m02) Calling .GetSSHKeyPath
	I0612 21:00:09.488688   50086 main.go:141] libmachine: (multinode-991051-m02) Calling .GetSSHUsername
	I0612 21:00:09.488810   50086 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17779-14199/.minikube/machines/multinode-991051-m02/id_rsa Username:docker}
	I0612 21:00:09.569899   50086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 21:00:09.585317   50086 status.go:257] multinode-991051-m02 status: &{Name:multinode-991051-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0612 21:00:09.585353   50086 status.go:255] checking status of multinode-991051-m03 ...
	I0612 21:00:09.585718   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0612 21:00:09.585758   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0612 21:00:09.601181   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39237
	I0612 21:00:09.601589   50086 main.go:141] libmachine: () Calling .GetVersion
	I0612 21:00:09.601989   50086 main.go:141] libmachine: Using API Version  1
	I0612 21:00:09.602011   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0612 21:00:09.602300   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0612 21:00:09.602509   50086 main.go:141] libmachine: (multinode-991051-m03) Calling .GetState
	I0612 21:00:09.604047   50086 status.go:330] multinode-991051-m03 host status = "Stopped" (err=<nil>)
	I0612 21:00:09.604060   50086 status.go:343] host is not running, skipping remaining checks
	I0612 21:00:09.604066   50086 status.go:257] multinode-991051-m03 status: &{Name:multinode-991051-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)
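
Note that `minikube status` exits non-zero once any node is down — exit status 7 in the runs above — so the test expects the non-zero exit rather than treating it as a failure. A small sketch of surfacing that exit code from Go, reusing the profile name and binary path from this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-991051", "status")
	out, err := cmd.Output()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero exit (7 in the runs above) signals that some component
		// is stopped; treat it as informational rather than a hard failure.
		fmt.Printf("status exit code: %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Printf("could not run status: %v\n", err)
	}
}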

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-991051 node start m03 -v=7 --alsologtostderr: (28.561204226s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.18s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-991051 node delete m03: (1.709667441s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.24s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (172.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-991051 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0612 21:09:56.707981   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-991051 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m51.52603675s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-991051 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (172.03s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (42.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-991051
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-991051-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-991051-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.174753ms)

                                                
                                                
-- stdout --
	* [multinode-991051-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-991051-m02' is duplicated with machine name 'multinode-991051-m02' in profile 'multinode-991051'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-991051-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-991051-m03 --driver=kvm2  --container-runtime=crio: (41.094900591s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-991051
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-991051: exit status 80 (207.324495ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-991051 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-991051-m03 already exists in multinode-991051-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-991051-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.15s)

                                                
                                    
TestScheduledStopUnix (115.63s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-254112 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-254112 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.069901326s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-254112 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-254112 -n scheduled-stop-254112
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-254112 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-254112 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-254112 -n scheduled-stop-254112
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-254112
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-254112 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0612 21:16:48.613472   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-254112
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-254112: exit status 7 (64.572525ms)

                                                
                                                
-- stdout --
	scheduled-stop-254112
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-254112 -n scheduled-stop-254112
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-254112 -n scheduled-stop-254112: exit status 7 (59.661463ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-254112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-254112
--- PASS: TestScheduledStopUnix (115.63s)
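
The scheduled-stop flow above schedules a stop, inspects it via the {{.TimeToStop}} status template, and then cancels it. A minimal sketch of the same sequence driven from Go, using only flags that appear in this log (the profile name is a placeholder):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run invokes the minikube binary used throughout this report and echoes
// the command plus its combined output.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %s\n%s", strings.Join(args, " "), out)
	return err
}

func main() {
	profile := "scheduled-stop-254112" // placeholder profile name from the log

	// Schedule a stop five minutes out, as the test does first.
	_ = run("stop", "-p", profile, "--schedule", "5m")

	// The pending stop is visible through the status template used above.
	_ = run("status", "--format={{.TimeToStop}}", "-p", profile)

	// Cancel the pending stop before it fires.
	_ = run("stop", "-p", profile, "--cancel-scheduled")
}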

                                                
                                    
TestRunningBinaryUpgrade (246.31s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2511354780 start -p running-upgrade-719458 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2511354780 start -p running-upgrade-719458 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m8.230352076s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-719458 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0612 21:19:56.704150   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-719458 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m54.217506304s)
helpers_test.go:175: Cleaning up "running-upgrade-719458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-719458
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-719458: (1.172002215s)
--- PASS: TestRunningBinaryUpgrade (246.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-721096 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-721096 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (77.491077ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-721096] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (97.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-721096 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-721096 --driver=kvm2  --container-runtime=crio: (1m36.844628722s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-721096 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (97.11s)

                                                
                                    
TestNetworkPlugins/group/false (2.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-701638 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-701638 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (95.951991ms)

                                                
                                                
-- stdout --
	* [false-701638] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0612 21:18:27.422820   58784 out.go:291] Setting OutFile to fd 1 ...
	I0612 21:18:27.422938   58784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:18:27.422948   58784 out.go:304] Setting ErrFile to fd 2...
	I0612 21:18:27.422953   58784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 21:18:27.423127   58784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17779-14199/.minikube/bin
	I0612 21:18:27.423694   58784 out.go:298] Setting JSON to false
	I0612 21:18:27.425014   58784 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7252,"bootTime":1718219855,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0612 21:18:27.425123   58784 start.go:139] virtualization: kvm guest
	I0612 21:18:27.427022   58784 out.go:177] * [false-701638] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0612 21:18:27.428466   58784 out.go:177]   - MINIKUBE_LOCATION=17779
	I0612 21:18:27.428466   58784 notify.go:220] Checking for updates...
	I0612 21:18:27.429794   58784 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 21:18:27.431194   58784 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17779-14199/kubeconfig
	I0612 21:18:27.432585   58784 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17779-14199/.minikube
	I0612 21:18:27.433827   58784 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0612 21:18:27.434980   58784 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 21:18:27.436584   58784 config.go:182] Loaded profile config "NoKubernetes-721096": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0612 21:18:27.436708   58784 config.go:182] Loaded profile config "kubernetes-upgrade-724108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0612 21:18:27.436806   58784 config.go:182] Loaded profile config "running-upgrade-719458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0612 21:18:27.436904   58784 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 21:18:27.471636   58784 out.go:177] * Using the kvm2 driver based on user configuration
	I0612 21:18:27.473104   58784 start.go:297] selected driver: kvm2
	I0612 21:18:27.473129   58784 start.go:901] validating driver "kvm2" against <nil>
	I0612 21:18:27.473161   58784 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 21:18:27.475409   58784 out.go:177] 
	W0612 21:18:27.476804   58784 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0612 21:18:27.478273   58784 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-701638 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-701638

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-701638

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-701638

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-701638

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-701638

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-701638

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-701638

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-701638

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-701638

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-701638

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: /etc/hosts:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: /etc/resolv.conf:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-701638

>>> host: crictl pods:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: crictl containers:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> k8s: describe netcat deployment:
error: context "false-701638" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-701638" does not exist

>>> k8s: netcat logs:
error: context "false-701638" does not exist

>>> k8s: describe coredns deployment:
error: context "false-701638" does not exist

>>> k8s: describe coredns pods:
error: context "false-701638" does not exist

>>> k8s: coredns logs:
error: context "false-701638" does not exist

>>> k8s: describe api server pod(s):
error: context "false-701638" does not exist

>>> k8s: api server logs:
error: context "false-701638" does not exist

>>> host: /etc/cni:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: ip a s:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: ip r s:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: iptables-save:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: iptables table nat:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> k8s: describe kube-proxy daemon set:
error: context "false-701638" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-701638" does not exist

>>> k8s: kube-proxy logs:
error: context "false-701638" does not exist

>>> host: kubelet daemon status:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: kubelet daemon config:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> k8s: kubelet logs:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-701638

>>> host: docker daemon status:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: docker daemon config:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: /etc/docker/daemon.json:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: docker system info:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: cri-docker daemon status:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: cri-docker daemon config:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: cri-dockerd version:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: containerd daemon status:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: containerd daemon config:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: /etc/containerd/config.toml:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: containerd config dump:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: crio daemon status:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: crio daemon config:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: /etc/crio:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

>>> host: crio config:
* Profile "false-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-701638"

----------------------- debugLogs end: false-701638 [took: 2.498821055s] --------------------------------
helpers_test.go:175: Cleaning up "false-701638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-701638
--- PASS: TestNetworkPlugins/group/false (2.73s)
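Note: the long run of "context was not found" and "Profile not found" messages above is expected. The debug-log collector issues its kubectl and minikube queries against the "false-701638" profile, which this test case evidently never starts (the whole group passes in 2.73s). A minimal sketch of guarding that kind of collection behind a context check, written as an illustrative shell snippet rather than the harness's actual logic:

# Hedged sketch: only run kubectl-based collection when the context exists.
if kubectl config get-contexts false-701638 >/dev/null 2>&1; then
  kubectl --context false-701638 get nodes,services,endpoints,daemonsets,deployments,pods -A
else
  echo ">>> skipping k8s collection: context false-701638 not found"
fi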

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (31.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-721096 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-721096 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.271472008s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-721096 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-721096 status -o json: exit status 2 (231.836409ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-721096","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-721096
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-721096: (1.019537445s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (31.52s)
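Note: `minikube status -o json` above exits with status 2 even though it prints valid JSON, because in a --no-kubernetes profile the kubelet and API server are intentionally stopped and the status command reflects that in its exit code. Scripts that only care about component state can read the JSON instead; a small sketch, assuming jq is available on the host:

# Read component state from the status JSON rather than relying on the exit code.
out/minikube-linux-amd64 -p NoKubernetes-721096 status -o json | jq -r '.Host, .Kubelet, .APIServer'
# expected for this --no-kubernetes profile: Running, Stopped, Stopped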

                                                
                                    
x
+
TestNoKubernetes/serial/Start (50.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-721096 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-721096 --no-kubernetes --driver=kvm2  --container-runtime=crio: (50.506029188s)
--- PASS: TestNoKubernetes/serial/Start (50.51s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-721096 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-721096 "sudo systemctl is-active --quiet service kubelet": exit status 1 (213.765685ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
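Note: this check passes because the remote command fails. `systemctl is-active --quiet` exits 0 only when the unit is active, and the "Process exited with status 3" in stderr is the standard is-active result for an inactive unit, so ssh propagates a non-zero exit and the test confirms the kubelet is not running. The same condition can be checked by hand (illustrative, not the test's exact invocation):

# Expect a non-zero exit: the kubelet unit is not active in a --no-kubernetes profile.
out/minikube-linux-amd64 ssh -p NoKubernetes-721096 "sudo systemctl is-active kubelet" || echo "kubelet inactive, as expected"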

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (2.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.39966677s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-721096
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-721096: (1.471172592s)
--- PASS: TestNoKubernetes/serial/Stop (1.47s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (44.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-721096 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-721096 --driver=kvm2  --container-runtime=crio: (44.198029433s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-721096 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-721096 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.21083ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestPause/serial/Start (94.11s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-037058 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0612 21:21:31.661502   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
E0612 21:21:48.613910   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/functional-944676/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-037058 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m34.113912048s)
--- PASS: TestPause/serial/Start (94.11s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (121.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4231152011 start -p stopped-upgrade-776864 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4231152011 start -p stopped-upgrade-776864 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m14.335851645s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4231152011 -p stopped-upgrade-776864 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4231152011 -p stopped-upgrade-776864 stop: (2.137571168s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-776864 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-776864 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.314007157s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (121.79s)
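Note: the upgrade path exercised here is to provision and stop a cluster with an old release binary (v1.26.0), then start the same profile with the binary under test so it has to pick up the existing state. Condensed, the sequence amounts to the following (flags copied from the runs above, logging flags omitted):

/tmp/minikube-v1.26.0.4231152011 start -p stopped-upgrade-776864 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
/tmp/minikube-v1.26.0.4231152011 -p stopped-upgrade-776864 stop
out/minikube-linux-amd64 start -p stopped-upgrade-776864 --memory=2200 --driver=kvm2 --container-runtime=crio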

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (72.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-701638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-701638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m12.887778876s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (96.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-701638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-701638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m36.109743301s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (96.11s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-776864
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (119.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-701638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-701638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m59.516700832s)
--- PASS: TestNetworkPlugins/group/calico/Start (119.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-701638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-701638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zqgf7" [a7f6fc98-6c5d-499c-b1d2-07802cf89de1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-zqgf7" [a7f6fc98-6c5d-499c-b1d2-07802cf89de1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004887945s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-701638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
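Note: the three short checks above probe the new cluster's data path from inside the netcat pod: DNS resolution of kubernetes.default, a TCP connect to port 8080 on localhost inside the pod, and a hairpin connect in which the pod reaches itself through its own "netcat" Service. Reproduced by hand against the same context (commands as in the tests, minus the probing-interval flag):

kubectl --context auto-701638 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z localhost 8080"
kubectl --context auto-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"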

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (91.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-701638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-701638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m31.821597437s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (91.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-d5skb" [a0d570a0-28aa-400b-bb03-83ac6e5646dc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.014822638s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-701638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-701638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jpmc6" [0bacc671-2d5c-460c-bdee-d28770be91a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jpmc6" [0bacc671-2d5c-460c-bdee-d28770be91a3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003947181s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-701638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (92.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-701638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-701638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m32.387151607s)
--- PASS: TestNetworkPlugins/group/flannel/Start (92.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (123.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-701638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-701638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m3.571636054s)
--- PASS: TestNetworkPlugins/group/bridge/Start (123.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-6cq47" [c089e920-c28e-4d0d-8191-8aeb24b8f73f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007176386s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-701638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (14.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-701638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qx62h" [786fd093-af76-4a90-83c9-f14c360a2478] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-qx62h" [786fd093-af76-4a90-83c9-f14c360a2478] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.004725878s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-701638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-701638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (69.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-701638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-701638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m9.023412254s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (69.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-701638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context custom-flannel-701638 replace --force -f testdata/netcat-deployment.yaml: (1.193706003s)
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zmtgl" [8b80713b-c86f-4d0c-9c2e-7fba7a14c80b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-zmtgl" [8b80713b-c86f-4d0c-9c2e-7fba7a14c80b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005660227s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-701638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9ll8d" [b00bbbd1-bae2-4804-b0d7-c1aecaa4f5a7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005071301s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-701638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-701638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zgcp2" [e43ddd5a-0f54-4637-9b2e-457f395d0b67] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-zgcp2" [e43ddd5a-0f54-4637-9b2e-457f395d0b67] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004587256s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-701638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (119.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-087875 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-087875 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (1m59.809775095s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (119.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-701638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-701638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-701638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6ggl6" [a0d555db-0ff6-4a60-8816-65005a32c8c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-6ggl6" [a0d555db-0ff6-4a60-8816-65005a32c8c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004188915s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-701638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-68x75" [74052e90-64b9-4743-af41-cfc9ebbec9a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-68x75" [74052e90-64b9-4743-af41-cfc9ebbec9a9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004286149s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-701638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-701638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-701638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
E0612 21:58:14.295163   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:58:14.422673   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (100.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-591460 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-591460 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (1m40.021283803s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (100.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-376087 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0612 21:29:56.704478   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/addons-899843/client.crt: no such file or directory
E0612 21:30:04.133724   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
E0612 21:30:04.139013   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
E0612 21:30:04.149611   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
E0612 21:30:04.169957   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
E0612 21:30:04.210304   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
E0612 21:30:04.291268   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-376087 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (1m23.823430293s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.82s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-087875 create -f testdata/busybox.yaml
E0612 21:30:04.451438   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [07f870b7-b7d0-4af9-85c6-43ad97de9791] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0612 21:30:04.772369   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
E0612 21:30:05.413521   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
E0612 21:30:06.694113   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
helpers_test.go:344: "busybox" [07f870b7-b7d0-4af9-85c6-43ad97de9791] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004747033s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-087875 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.39s)
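Editor's note: the DeployApp step waits up to 8m0s for pods matching "integration-test=busybox" to reach Running. A rough sketch of that wait loop, shelling out to kubectl rather than using the suite's real helpers in helpers_test.go (the context name and poll interval are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPodsRunning polls kubectl until every pod matching the label
// selector reports phase Running, or the deadline passes.
func waitForPodsRunning(kubeContext, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			allRunning := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("pods matching %q not Running within %s", selector, timeout)
}

func main() {
	// Context name mirrors the profile used above.
	if err := waitForPodsRunning("no-preload-087875", "integration-test=busybox", 8*time.Minute); err != nil {
		fmt.Println(err)
	}
}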

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-376087 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4d9ff0c0-b2e4-4535-b3e5-3cd361febf51] Pending
helpers_test.go:344: "busybox" [4d9ff0c0-b2e4-4535-b3e5-3cd361febf51] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0612 21:30:09.254301   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
helpers_test.go:344: "busybox" [4d9ff0c0-b2e4-4535-b3e5-3cd361febf51] Running
E0612 21:30:14.375289   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004045195s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-376087 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-087875 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-087875 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)
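Editor's note: EnableAddonWhileActive enables metrics-server with an overridden image and registry, then describes the deployment. A hedged sketch of that follow-up check; the substrings searched for mirror the flags passed above (fake.domain, echoserver:1.4), everything else is illustrative and not the suite's actual assertion:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Describe the metrics-server deployment and confirm the addon picked up
	// the overridden registry/image from the addons enable flags.
	out, err := exec.Command("kubectl", "--context", "no-preload-087875",
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		fmt.Println("describe failed:", err)
		return
	}
	desc := string(out)
	if strings.Contains(desc, "fake.domain") && strings.Contains(desc, "echoserver:1.4") {
		fmt.Println("metrics-server deployment uses the overridden image")
	} else {
		fmt.Println("override not found in deployment description")
	}
}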

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-376087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-376087 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-591460 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cef59c25-98ac-4ab7-9407-d5b57de61062] Pending
helpers_test.go:344: "busybox" [cef59c25-98ac-4ab7-9407-d5b57de61062] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0612 21:30:24.616299   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/auto-701638/client.crt: no such file or directory
helpers_test.go:344: "busybox" [cef59c25-98ac-4ab7-9407-d5b57de61062] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00483552s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-591460 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-591460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-591460 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (695.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-087875 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-087875 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (11m35.165353373s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-087875 -n no-preload-087875
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (695.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (575.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-376087 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-376087 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (9m35.249619703s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-376087 -n default-k8s-diff-port-376087
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (575.50s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (625.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-591460 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0612 21:33:10.459667   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
E0612 21:33:14.294829   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:33:14.300088   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:33:14.310356   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:33:14.330672   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:33:14.371363   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:33:14.422741   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:33:14.428050   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:33:14.438332   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:33:14.451499   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:33:14.458681   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:33:14.498955   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:33:14.579269   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:33:14.612531   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:33:14.740021   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:33:14.933371   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:33:15.060755   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:33:15.574483   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:33:15.701807   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:33:16.854775   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:33:16.982005   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:33:18.185695   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/kindnet-701638/client.crt: no such file or directory
E0612 21:33:19.415316   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:33:19.542900   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:33:24.536172   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:33:24.663513   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
E0612 21:33:28.211216   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/custom-flannel-701638/client.crt: no such file or directory
E0612 21:33:34.776364   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/enable-default-cni-701638/client.crt: no such file or directory
E0612 21:33:34.904731   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/bridge-701638/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-591460 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (10m25.242945581s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-591460 -n embed-certs-591460
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (625.52s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-983302 --alsologtostderr -v=3
E0612 21:33:51.420608   21444 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17779-14199/.minikube/profiles/flannel-701638/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-983302 --alsologtostderr -v=3: (1.38599512s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-983302 -n old-k8s-version-983302: exit status 7 (63.867838ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-983302 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
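Editor's note: after a stop, "minikube status" exits non-zero; the log above shows the test accepting exit status 7 with a Stopped host ("may be ok"). A minimal sketch of handling that exit code in Go, with the profile name copied from the run above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-983302", "-n", "old-k8s-version-983302")
	out, err := cmd.Output() // stdout is still returned alongside an ExitError
	host := strings.TrimSpace(string(out))

	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
		fmt.Printf("host is %q, exit status 7 (may be ok for a stopped cluster)\n", host)
		return
	}
	if err != nil {
		fmt.Println("unexpected status error:", err)
		return
	}
	fmt.Printf("host is %q\n", host)
}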

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (56.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-007396 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-007396 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (56.454013739s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (56.45s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-007396 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-007396 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.038981303s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-007396 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-007396 --alsologtostderr -v=3: (7.35816171s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-007396 -n newest-cni-007396
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-007396 -n newest-cni-007396: exit status 7 (88.689163ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-007396 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.52s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-007396 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-007396 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (35.977379212s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-007396 -n newest-cni-007396
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-007396 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)
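Editor's note: VerifyKubernetesImages lists the images in the profile and reports anything that is not a stock minikube/Kubernetes image (here kindest/kindnetd). A sketch of such a check, assuming the plain "image list" output prints one image reference per line; the allowed prefixes are illustrative, not the suite's real allowlist:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "newest-cni-007396",
		"image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}

	// Prefixes treated as minikube's own images; anything else is reported,
	// the way the test above reported kindest/kindnetd.
	allowed := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/"}

	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		img := strings.TrimSpace(sc.Text())
		if img == "" {
			continue
		}
		known := false
		for _, p := range allowed {
			if strings.HasPrefix(img, p) {
				known = true
			}
		}
		if !known {
			fmt.Println("found non-minikube image:", img)
		}
	}
}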

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-007396 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-007396 -n newest-cni-007396
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-007396 -n newest-cni-007396: exit status 2 (236.338614ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-007396 -n newest-cni-007396
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-007396 -n newest-cni-007396: exit status 2 (232.360666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-007396 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-007396 -n newest-cni-007396
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-007396 -n newest-cni-007396
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.38s)
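Editor's note: the Pause step pauses the profile, checks that the apiserver reports Paused and the kubelet reports Stopped (exit status 2 from "status" is expected while paused), then unpauses and re-checks. A condensed sketch of that sequence, assuming the same binary path and profile name as above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// status runs `minikube status` with the given Go-template format and returns
// the trimmed output plus the exit code (0 if the command succeeded).
func status(profile, format string) (string, int) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format="+format, "-p", profile, "-n", profile)
	out, err := cmd.Output()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	profile := "newest-cni-007396" // profile name taken from the run above

	_ = exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run()

	// While paused, the log above shows APIServer=Paused and Kubelet=Stopped,
	// each reported with exit status 2 ("may be ok").
	api, apiCode := status(profile, "{{.APIServer}}")
	kubelet, kubeletCode := status(profile, "{{.Kubelet}}")
	fmt.Printf("paused: apiserver=%s (exit %d), kubelet=%s (exit %d)\n",
		api, apiCode, kubelet, kubeletCode)

	_ = exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run()

	api, _ = status(profile, "{{.APIServer}}")
	kubelet, _ = status(profile, "{{.Kubelet}}")
	fmt.Printf("unpaused: apiserver=%s, kubelet=%s\n", api, kubelet)
}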

                                                
                                    

Test skip (37/312)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.1/cached-images 0
15 TestDownloadOnly/v1.30.1/binaries 0
16 TestDownloadOnly/v1.30.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
41 TestAddons/parallel/Volcano 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
135 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
255 TestNetworkPlugins/group/kubenet 2.64
263 TestNetworkPlugins/group/cilium 2.97
276 TestStartStop/group/disable-driver-mounts 0.16
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
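Editor's note: the TunnelCmd skips above all trace back to the same precondition, that modifying routes would require a sudo password on this host. A hedged sketch of such a check; the exact command the suite probes may differ, here `sudo -n` simply asks sudo to fail rather than prompt:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `sudo -n` refuses to prompt for a password; if it fails, touching the
	// routing table would need interactive auth, so a tunnel test would skip.
	if err := exec.Command("sudo", "-n", "route", "-n").Run(); err != nil {
		fmt.Println("password required to execute 'route', skipping tunnel tests:", err)
		return
	}
	fmt.Println("route can run without a password; tunnel tests could proceed")
}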

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-701638 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-701638

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-701638

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-701638

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-701638

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-701638

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-701638

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-701638

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-701638

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-701638

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-701638

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-701638

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-701638" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-701638" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-701638

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-701638"

                                                
                                                
----------------------- debugLogs end: kubenet-701638 [took: 2.501703014s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-701638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-701638
--- SKIP: TestNetworkPlugins/group/kubenet (2.64s)
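
Note: every probe in the debugLogs block above fails with either "Profile "kubenet-701638" not found" or "context "kubenet-701638" does not exist". This is expected: the kubenet variant was skipped before any cluster was started, so the post-test diagnostics collector ran against a profile and kubectl context that never existed. A minimal way to confirm this locally, assuming the same locally built binary at out/minikube-linux-amd64 (the profile name is simply the one taken from the log above):

    out/minikube-linux-amd64 profile list              # the skipped profile is not listed
    kubectl config get-contexts kubenet-701638         # errors, since no such context exists
    out/minikube-linux-amd64 start -p kubenet-701638   # would create the profile these probes expect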

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (2.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-701638 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-701638" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-701638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-701638" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-701638

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-701638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-701638"

                                                
                                                
----------------------- debugLogs end: cilium-701638 [took: 2.838209377s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-701638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-701638
--- SKIP: TestNetworkPlugins/group/cilium (2.97s)
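
The cilium block above follows the same pattern, and it also shows the two families of debugLogs probes side by side: host-level probes, which go through the minikube profile and therefore report "Profile "cilium-701638" not found", and Kubernetes-level probes, which go through kubectl and therefore report a missing context (or the empty kubeconfig under ">>> k8s: kubectl config"). As a rough illustration only, against a hypothetical running profile of the same name, the two families correspond to commands along the lines of:

    out/minikube-linux-amd64 -p cilium-701638 ssh -- cat /etc/resolv.conf   # host-level probe (via the profile)
    kubectl --context cilium-701638 get pods -A                             # Kubernetes-level probe (via the context)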

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-576552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-576552
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
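
The disable-driver-mounts group is gated on the virtualbox driver, so on this KVM run it only creates and immediately deletes its placeholder profile. To re-run just this group, the standard go test -run filter on the nested test name is enough; the sketch below assumes a minikube checkout with the integration tests under test/integration (that path is an assumption, not something stated in this report):

    go test ./test/integration -run 'TestStartStop/group/disable-driver-mounts' -timeout 30m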

                                                
                                    